Tutorial: How to rate limit Python async API requests
With an example that performs 100 simultaneous requests to the Etherscan API
https://elnaril.hashnode.dev/how-to-rate-limit-python-async-requests-to-etherscan-and-other-apis
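The gist of throttling many simultaneous requests can be sketched with nothing but the standard library. This is a minimal leaky-bucket pattern (not the article's exact code): a semaphore caps how many coroutines may start per second, and each acquired slot is released one second later. `fetch` is a hypothetical stand-in for a real HTTP call.

```python
import asyncio
import time

async def fetch(i):
    # stand-in for a real HTTP call (e.g. one Etherscan API request)
    await asyncio.sleep(0.01)
    return i

async def gather_rate_limited(coros, max_per_second):
    """Run coroutines concurrently, starting at most
    `max_per_second` of them in any one-second window."""
    sem = asyncio.Semaphore(max_per_second)

    async def run(coro):
        await sem.acquire()
        # free this slot one second after it was taken (leaky bucket)
        asyncio.get_running_loop().call_later(1.0, sem.release)
        return await coro

    return await asyncio.gather(*(run(c) for c in coros))

# 10 "requests" at 5 per second: the second batch starts after ~1 s
start = time.monotonic()
results = asyncio.run(gather_rate_limited([fetch(i) for i in range(10)], 5))
elapsed = time.monotonic() - start  # at least ~1 s, since 5 slots must recycle
```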
Proposal for a Django project template
The author's take on what a project template for advanced Django usage could look like: modern tooling (for Python and UI dependencies, as well as configuration/environment management), but not too opinionated.
https://david.guillot.me/en/posts/tech/proposal-for-a-django-project-template/
CPython's Garbage Collector and Its Impact on Application Performance
Learn how knowledge of CPython's GC internals translates into performance insights for your code.
https://blog.codingconfessions.com/p/connecting-cpythons-gc-internals
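One practical takeaway from understanding the generational collector can be sketched with the standard `gc` module: pausing the collector during an allocation-heavy phase avoids repeated cyclic-GC passes over objects that are still reachable anyway. This is a generic illustration, not code from the article.

```python
import gc

# The collector is generational: gen0 runs once allocations minus
# deallocations of container objects exceed the first threshold
# (commonly (700, 10, 10) in CPython, but check your version).
thresholds = gc.get_threshold()

# In an allocation-heavy phase, pausing collection avoids repeated
# cyclic-GC scans of objects that are all still reachable:
gc.disable()
data = [{"id": i} for i in range(100_000)]  # many container allocations
gc.enable()

# ... then trigger one explicit full collection when convenient
unreachable = gc.collect()
```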
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
https://github.com/NVIDIA/TransformerEngine
Everything I've learned so far about running local LLMs
A post about running large language models (LLMs) locally on a computer. It discusses what LLMs are and how to set them up to run on your own machine. The article also covers some of the limitations of LLMs, but highlights their potential for tasks like proofreading and creative writing.
https://nullprogram.com/blog/2024/11/10
Introducing DjangoVer
The article introduces DjangoVer, a versioning system for Django-related packages that aligns the package version with the latest supported Django feature release. It provides clarity on compatibility, signaling maintenance and compatibility status through the version number while addressing limitations of traditional versioning systems like Semantic Versioning.
https://www.b-list.org/weblog/2024/nov/18/djangover/
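As I understand the scheme described in the article, a DjangoVer version number embeds the targeted Django feature release directly, so compatibility can be read off the version string. A tiny sketch (the helper name is mine, not from the article):

```python
def supported_django(package_version: str) -> str:
    """Given a DjangoVer version like "5.1.3", return the Django
    feature release it targets ("5.1"); the final component counts
    the package's own releases within that cycle."""
    major, feature, _ = package_version.split(".", 2)
    return f"{major}.{feature}"

supported_django("5.1.3")  # -> "5.1": third package release targeting Django 5.1
```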
Is async Django ready for prime time?
Explore async Django's readiness for production use, its benefits, challenges, and how AI workloads can leverage its capabilities effectively.
https://jonathanadly.com/is-async-django-ready-for-prime-time
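The core win for I/O-bound (e.g. AI/LLM) workloads can be shown with plain asyncio: inside an async Django view (`async def view(request): ...`, supported since Django 4.1), independent awaits can run concurrently instead of serially. The snippet below uses a simulated upstream call rather than Django itself:

```python
import asyncio
import time

async def call_model(prompt):
    # stand-in for a slow upstream call (LLM API, database, etc.)
    await asyncio.sleep(0.1)
    return f"reply to {prompt!r}"

async def handle_request(prompts):
    # In an async Django view, the same gather pattern lets the
    # three upstream calls overlap instead of running back to back.
    return await asyncio.gather(*(call_model(p) for p in prompts))

start = time.monotonic()
replies = asyncio.run(handle_request(["a", "b", "c"]))
elapsed = time.monotonic() - start  # ~0.1 s concurrent, vs ~0.3 s serial
```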
EasyAnimate
An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion.
https://github.com/aigc-apps/EasyAnimate
Is Python Really That Slow?
The post explores Python's perceived slowness, highlighting that it stems from its interpreted nature and focus on developer productivity rather than raw performance. By leveraging tools like C extensions, async programming, or just-in-time compilers, developers can often overcome performance concerns effectively.
https://blog.miguelgrinberg.com/post/is-python-really-that-slow
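One of the post's escape hatches, moving hot loops into C, is visible even without writing an extension: builtins like `sum()` run their loop in C. A quick illustrative measurement (numbers will vary by machine and interpreter):

```python
import timeit

def py_sum(n):
    # hot loop executed by the Python interpreter, one bytecode at a time
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
assert py_sum(n) == sum(range(n))  # identical result

t_loop = timeit.timeit(lambda: py_sum(n), number=20)
t_builtin = timeit.timeit(lambda: sum(range(n)), number=20)
# sum() runs its loop in C and is typically several times faster
```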
Benchmark: DuckDB, Polars, Pandas, Arrow, SQLite, NanoCube on filtering / point queries
https://www.reddit.com/r/Python/comments/1gyoi7n/benchmark_duckdb_polars_pandas_arrow_sqlite/
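To make "point query" concrete: the workload is fetching one row by key out of many. A self-contained mini-benchmark of that shape, using only the standard library (this is an illustration of the methodology, not the Reddit benchmark itself):

```python
import sqlite3

# 100k rows; the point query fetches one of them by its key
rows = [(i, f"row{i}") for i in range(100_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

def sqlite_point_query():
    # index lookup on the primary key
    return conn.execute("SELECT val FROM t WHERE id = ?", (54321,)).fetchone()[0]

def list_scan():
    # naive O(n) scan, for comparison
    return next(v for i, v in rows if i == 54321)

assert sqlite_point_query() == list_scan() == "row54321"
```

Timing the two callables with `timeit` reproduces the kind of comparison the benchmark makes across engines.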
PacktPublishing / LLM-Engineers-Handbook
The LLM's practical guide: From the fundamentals to deploying advanced LLM and RAG apps to AWS using LLMOps best practices
https://github.com/PacktPublishing/LLM-Engineers-Handbook
garak
garak checks if an LLM can be made to fail in a way we don't want. garak probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses.
https://github.com/NVIDIA/garak