Running Python on a serverless GPU instance for machine learning inference
https://saeedesmaili.com/posts/running-python-on-a-serverless-gpu-instance/
Saeed Esmaili
I was experimenting with some speech-to-text work using OpenAI’s Whisper models today, and transcribing a 15-minute audio file with the Whisper tiny model on AWS Lambda (3 vCPUs) took 120 seconds. I was curious how much faster this could be if I ran the same transcription…
Testing with Python (part 1): the basics
Summary: This intro is about dumb test writing, as it's the necessary foundation to learn what comes ...
https://www.bitecode.dev/p/testing-with-python-part-1-the-basics
www.bitecode.dev
Tautology, the masterclass
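In the spirit of the article's "dumb test writing": a test is just a function whose name starts with `test_`, using a bare `assert`. The `add` function here is a stand-in example of my own, not from the article; run the file with `pytest`.

```python
# The function under test (a hypothetical example, not from the article).
def add(a, b):
    return a + b

# pytest discovers test_* functions automatically; a bare assert is enough.
def test_add():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```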
An unbiased evaluation of Python environment and packaging tools (2023)
https://alpopkes.com/posts/python/packaging_tools/
Anna-Lena Popkes
An unbiased evaluation of environment management and packaging tools
Last update This post was last updated on August 29th, 2024.
Motivation When I started with Python and created my first package I was confused. Creating and managing a package seemed much harder than I expected. In addition, multiple tools existed and I wasn’t…
Develop an Asyncio Echo Client and Server
You can develop an echo client and server using asyncio connections and streams. An echo server ...
https://superfastpython.com/asyncio-echo-client-server/
Super Fast Python
Develop an Asyncio Echo Client and Server - Super Fast Python
You can develop an echo client and server using asyncio connections and streams. An echo server accepts client connections that send a message and reply with the same message, in turn, echoing it back. Developing an echo client and server is a common exercise…
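A minimal sketch of the pattern the tutorial describes, using asyncio streams: the server reads a line and writes it straight back, and the client sends one message and reads the echo. The port number and handler name are my own choices, not from the tutorial.

```python
import asyncio

async def handle_client(reader, writer):
    data = await reader.readline()   # receive one line from the client
    writer.write(data)               # echo it back unchanged
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # start_server begins accepting connections as soon as it is created
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8899)
    async with server:
        # client side: connect, send a message, read the echo
        reader, writer = await asyncio.open_connection("127.0.0.1", 8899)
        writer.write(b"hello\n")
        await writer.drain()
        echoed = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return echoed

if __name__ == "__main__":
    print(asyncio.run(main()))  # b'hello\n'
```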
Leibniz formula for π in Python, JavaScript, and Ruby
Different ways to calculate the value of π using the Leibniz ...
https://www.peterbe.com/plog/leibniz-formula-for-pi
Peterbe
Leibniz formula for π in Python, JavaScript, and Ruby - Peterbe.com
Different ways to calculate the value of π using the Leibniz formula
Hashquery
A Python framework for defining and querying BI models in your data warehouse.
https://github.com/hashboard-hq/hashquery
GitHub
GitHub - hashboard-hq/hashquery: A Python framework for defining and querying BI models in your data warehouse
A Python framework for defining and querying BI models in your data warehouse - hashboard-hq/hashquery
llama3
This release includes model weights and starting code for pre-trained and instruction tuned Llama 3 language models — including sizes of 8B to 70B parameters.
https://github.com/meta-llama/llama3
GitHub
GitHub - meta-llama/llama3: The official Meta Llama 3 GitHub site
The official Meta Llama 3 GitHub site. Contribute to meta-llama/llama3 development by creating an account on GitHub.
fudan-generative-vision / champ
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
https://github.com/fudan-generative-vision/champ
GitHub
GitHub - fudan-generative-vision/champ: [ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric…
[ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance - fudan-generative-vision/champ
PyTorch 2.3
PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing for users to migrate their own Triton kernels from eager without experiencing performance regressions or graph breaks. Tensor Parallelism improves the experience for training Large Language Models using native PyTorch functions, which has been validated on training runs for 100B parameter models. As wel...
https://pytorch.org/blog/pytorch2-3/
PyTorch
PyTorch 2.3 Release Blog
We are excited to announce the release of PyTorch® 2.3 (release note)! PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing for users to migrate their own Triton kernels from eager without experiencing performance regressions…
Llama 3: Get building with LLMs in 5 minutes
Get started building transformative AI-powered features within 5 minutes using Llama 3, Ollama, and Python.
https://www.denoise.digital/llama-3-get-started-with-llms/
Denoise Digital: Decoding Tech Trends for Success
Llama 3: Get building with LLMs in 5 minutes
Building a Voice Notes App with Django and OpenAI
We'll build a voice notes app that uses OpenAI to perform speech to text. As a bonus, we'll use AlpineJS to manage state on the frontend.
https://circumeo.io/blog/entry/building-a-voice-notes-app-with-django-and-openai/
www.circumeo.io
Building a Voice Notes App with Django and OpenAI
Run Llama 3 locally using Ollama and LlamaEdge
Meta has unveiled Llama3, and now you can run it locally using Ollama. In this video, I explain how to use Ollama to operate various language models, specifically focusing on Llama2 and Llama3. I'll also guide you through the WebUI for the project, demonstrating how to serve models with Ollama and interact with them using Python.
https://www.youtube.com/watch?v=wPuoMaD_SnY
YouTube
Run Llama 3 locally using Ollama and LlamaEdge
Note - As an AI model running locally it does not have direct access to the internet, although you can use python or any other language to access data and then pass on the information to the model. But in the demo it mimics the URL that I provided and tries…
InstructLab
Command-line interface. Use this to chat with the model or train the model (training consumes the taxonomy data)
https://github.com/instructlab/instructlab
GitHub
GitHub - instructlab/instructlab: InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow…
InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data. - instructlab/instructlab