Cosine Similarity and Text Embeddings In Python with OpenAI
The article discusses how to use cosine similarity to compare text embeddings, which are vector representations of text that capture semantic meaning, in order to determine the similarity between different text inputs. It provides example code for calculating cosine similarity between text embeddings generated using the OpenAI API.
https://earthly.dev/blog/cosine_similarity_text_embeddings/
Earthly Blog
Okay, so I wanted to add related items to the sidebar on the Earthly Blog. Since we are approaching 500 blog posts, building this related ...
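The metric itself is easy to sketch without the OpenAI client. Below is a minimal pure-Python version of cosine similarity over toy vectors; the article pairs the same formula with real embedding vectors from the OpenAI API, and the vectors here are made up purely for illustration:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity = dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embedding vectors.
v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]   # parallel to v1, so similarity is ~1.0
v3 = [1.0, 0.0, 0.0]
v4 = [0.0, 1.0, 0.0]   # orthogonal to v3, so similarity is 0.0

print(cosine_similarity(v1, v2))  # ~1.0
print(cosine_similarity(v3, v4))  # 0.0
```

Real embedding vectors from the API have hundreds or thousands of dimensions, but the computation is identical.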
FreeAskInternet
FreeAskInternet is a completely free, private, locally running search aggregator and answer generator powered by LLMs, with no GPU needed. The user asks a question, the system runs a multi-engine search, and the combined search results are fed to a ChatGPT-3.5-class LLM, which generates an answer based on them.
https://github.com/nashsu/FreeAskInternet
GitHub
Running Python on a serverless GPU instance for machine learning inference
I was experimenting with some speech-to-text work using OpenAI’s Whisper models today, and transcribing a 15-minute audio file with the Whisper tiny model on AWS Lambda (3 vCPUs) took 120 seconds. I was curious how much faster this could be if I ran the same transcription…
https://saeedesmaili.com/posts/running-python-on-a-serverless-gpu-instance/
Saeed Esmaili
Testing with Python (part 1): the basics
Summary: This intro is about dumb test writing, as it's the necessary foundation to learn what comes ...
https://www.bitecode.dev/p/testing-with-python-part-1-the-basics
www.bitecode.dev
Tautology, the masterclass
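In the spirit of the post's "dumb test writing" starting point, here is a minimal sketch: plain assert statements against a small function in a script. The `slugify` function is a hypothetical example chosen for illustration, not taken from the post:

```python
def slugify(text):
    # Hypothetical function under test (not from the post): trim whitespace,
    # lowercase, and replace spaces with dashes.
    return text.strip().lower().replace(" ", "-")

# The simplest possible tests: bare assert statements in a script.
# They crash loudly on failure and say nothing on success.
assert slugify("Hello World") == "hello-world"
assert slugify("  Python  ") == "python"
print("all assertions passed")
```

This is the foundation the series builds on before moving to proper test runners.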
An unbiased evaluation of Python environment and packaging tools (2023)
https://alpopkes.com/posts/python/packaging_tools/
Anna-Lena Popkes
An unbiased evaluation of environment management and packaging tools
Last update This post was last updated on August 29th, 2024.
Motivation: When I started with Python and created my first package, I was confused. Creating and managing a package seemed much harder than I expected. In addition, multiple tools existed and I wasn’t…
Develop an Asyncio Echo Client and Server
You can develop an echo client and server using asyncio connections and streams. An echo server accepts client connections that send a message and replies with the same message, echoing it back. Developing an echo client and server is a common exercise…
https://superfastpython.com/asyncio-echo-client-server/
Super Fast Python
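As a sketch of that exercise (not the post's exact code), the snippet below runs both roles in one process using asyncio streams: the server echoes back whatever it reads, and the client sends one message and reads the reply. The loopback address and the single read-then-close protocol are simplifying assumptions for illustration:

```python
import asyncio

async def handle_echo(reader, writer):
    # Echo handler: read one message and write the same bytes back.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Bind to port 0 so the OS picks a free port for us.
    server = await asyncio.start_server(handle_echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    async with server:
        # Act as the client against our own server for one round trip.
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(b"hello, world")
        await writer.drain()
        echoed = await reader.read(1024)
        writer.close()
        await writer.wait_closed()
        return echoed

echoed = asyncio.run(main())
print(echoed.decode())  # hello, world
```

A real echo server would loop over multiple messages and clients; a single round trip keeps the sketch self-contained.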
Leibniz formula for π in Python, JavaScript, and Ruby
Different ways to calculate the value of π using the Leibniz formula
https://www.peterbe.com/plog/leibniz-formula-for-pi
Peterbe
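The Python variant is short enough to sketch here (an illustrative version, not necessarily the post's exact code). The Leibniz series is π/4 = 1 - 1/3 + 1/5 - 1/7 + …, so summing the alternating terms and multiplying by 4 approximates π, though convergence is famously slow:

```python
import math

def leibniz_pi(terms):
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (Gregory-Leibniz series)
    total = 0.0
    sign = 1.0
    for k in range(terms):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * total

approx = leibniz_pi(100_000)
print(approx)                  # slowly approaches math.pi
print(abs(approx - math.pi))  # error shrinks roughly like 1/terms
```

For an alternating series the error is bounded by the first omitted term, so 100,000 terms give only about five correct digits.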
Hashquery
A Python framework for defining and querying BI models in your data warehouse.
https://github.com/hashboard-hq/hashquery
GitHub
llama3
This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters.
https://github.com/meta-llama/llama3
GitHub
GitHub - meta-llama/llama3: The official Meta Llama 3 GitHub site
fudan-generative-vision / champ
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
https://github.com/fudan-generative-vision/champ
GitHub
[ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
PyTorch 2.3
PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing for users to migrate their own Triton kernels from eager without experiencing performance regressions or graph breaks. Tensor Parallelism improves the experience for training Large Language Models using native PyTorch functions, which has been validated on training runs for 100B parameter models. As wel...
https://pytorch.org/blog/pytorch2-3/
PyTorch
PyTorch 2.3 Release Blog
We are excited to announce the release of PyTorch® 2.3!
Llama 3: Get building with LLMs in 5 minutes
Get started building transformative AI-powered features within 5 minutes using Llama 3, Ollama, and Python.
https://www.denoise.digital/llama-3-get-started-with-llms/
Denoise Digital: Decoding Tech Trends for Success
Building a Voice Notes App with Django and OpenAI
We'll build a voice notes app that uses OpenAI to perform speech to text. As a bonus, we'll use AlpineJS to manage state on the frontend.
https://circumeo.io/blog/entry/building-a-voice-notes-app-with-django-and-openai/
www.circumeo.io