Analyzing "Sorting a million 32-bit integers in 2MB of RAM using Python"
We are going to revisit Guido's famous "Sorting a million 32-bit integers in 2MB of RAM ..." post.
https://www.bitecode.dev/p/analyzing-sorting-a-million-32-bit
2MB ought to be enough for anybody
Doubiiu / DynamiCrafter
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors [ECCV 2024, Oral]
https://github.com/Doubiiu/DynamiCrafter
EvalPlus
EvalPlus for rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024).
https://github.com/evalplus/evalplus
Using LLMs to Generate Fuzz Generators
The post explores the effectiveness of Large Language Models (LLMs) in generating fuzz drivers for library API fuzzing. It discusses the challenges and benefits of LLM-based fuzz driver generation, highlighting its practicality, strategies for complex API usage, and areas for improvement based on a comprehensive study and evaluation.
https://verse.systems/blog/post/2024-03-09-using-llms-to-generate-fuzz-generators
Toby's Blog
GGUF, the long way around
This is an article about GGUF, a file format used for machine learning models. It discusses what machine learning models are and how they are produced.
https://vickiboykis.com/2024/02/28/gguf-the-long-way-around/
★❤✰ Vicki Boykis ★❤✰
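For a taste of what the article walks through: a GGUF file opens with a small fixed header. Assuming the GGUFv3 layout (4-byte magic "GGUF", uint32 version, uint64 tensor count, uint64 metadata key-value count, all little-endian), parsing it is a one-liner with struct; the fake header below is purely illustrative:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(buf):
    """Parse the fixed-size GGUF header: magic, version, tensor count, metadata KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# Build a fake header to show the layout (version 3, 2 tensors, 5 metadata keys).
header = struct.pack("<4sIQQ", GGUF_MAGIC, 3, 2, 5)
info = read_gguf_header(header)
```

Everything after this header (metadata values, tensor descriptors, aligned tensor data) is where the format gets interesting, and that is what the article covers at length.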
openllmetry
Open-source observability for your LLM application, based on OpenTelemetry.
https://github.com/traceloop/openllmetry
Create A Machine Learning Powered NCAA Bracket
Dive into the fascinating world of machine learning and AI as we guide you through developing a model designed to predict NCAA tournament outcomes. From initial setup to final predictions, we’ll cover everything you need to create your own powerhouse model.
https://www.youtube.com/watch?v=cHtAEWkvSMU
YouTube: Making Sports Predictions with Data Science
We Hacked Google A.I. for $50,000
This article discusses the author's experience of participating in a hacking event in Las Vegas where vulnerabilities were discovered, leading to the successful hacking of Google. Despite the initial achievement, the Google VRP team extended the competition deadline to encourage more creative findings, highlighting the ongoing challenges and opportunities in the realm of cybersecurity.
https://www.landh.tech/blog/20240304-google-hack-50000
Large Language Models On-Device with MediaPipe and TensorFlow Lite
The article discusses the release of the experimental MediaPipe LLM Inference API, enabling Large Language Models (LLMs) to run fully on-device across platforms. This transformative capability addresses the significant memory and compute demands of LLMs, which are over a hundred times larger than traditional on-device models, achieved through optimizations like new ops, quantization, cac...
https://developers.googleblog.com/2024/03/running-large-language-models-on-device-with-mediapipe-andtensorflow-lite.html
Test out the MediaPipe LLM Inference API via our web demo. The Web SDK will be released in the next few weeks with the iOS SDK coming soon.
Python Gevent in practice: common pitfalls to keep in mind
Learn more about the common pitfalls of using the asynchronous Python library, Gevent, and how to resolve them in this article.
https://upsun.com/blog/python-gevent-best-practices/
Speed up Django’s collectstatic command with Collectfasta
The post introduces Collectfasta, an updated fork of Collectfast designed to enhance the performance of Django's collectstatic command. By optimizing the repository and improving performance, Collectfasta offers faster execution and efficiency compared to the standard Django command, providing a valuable tool for developers seeking enhanced performance in their Django projects.
https://jasongi.com/2024/03/04/speed-up-djangos-collectstatic-command-with-collectfasta/
Django’s collectstatic command (added in Django 1.3 – March 23, 2011) was designed for storage backends where file retrieval was cheap because it was on your local disk. In Django 1.4 (March 23, 2012) Django introduced CachedStaticFilesStorage which would…
Understanding Context Manager and its Syntactic Sugar
https://bjoernricks.github.io/posts/python/context-manager/
Björn Ricks
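The "syntactic sugar" in question: a `with` block is roughly equivalent to calling `__enter__`, running the body, and guaranteeing `__exit__` in a `finally`. A minimal illustration of both forms:

```python
class Resource:
    """Tiny context manager that records enter/exit calls."""
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append("enter")
        return self
    def __exit__(self, exc_type, exc, tb):
        self.events.append("exit")
        return False  # False means: do not swallow exceptions

# The sugared form:
r = Resource()
with r as handle:
    handle.events.append("body")

# Roughly what the interpreter does under the hood:
r2 = Resource()
handle2 = r2.__enter__()
try:
    handle2.events.append("body")
finally:
    r2.__exit__(None, None, None)
```

Both objects end up with the same event trace, which is the whole point: `with` is a guarantee about `__exit__`, not magic.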
schedule-texts-from-txt
Schedule iMessage or SMS texts from .txt files.
https://github.com/reidjs/schedule-texts-from-txt