https://www.arxiv-vanity.com/
> arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF.
WOW! 😳
🤯1
https://buttondown.email/hillelwayne/archive/why-you-should-read-data-and-reality/
This phrase precisely explains why OOP has, in practice, failed, as have roughly all the other "programming methodologies" and design methodologies.
Reality is richer than any model; that's the first thing we forget, and it comes back like a boomerang to smack us on the back of the head at the worst possible moment.
Link to the book is inside the post.
#free #book #modeling
Once more: we are not modeling reality, but the way information about reality is processed, by people. — Bill Kent
Buttondown
Why You Should Read "Data and Reality"
Once more: we are not modeling reality, but the way information about reality is processed, by people. — Bill Kent I've got this working theory that you can...
👍3
https://arxiv.org/pdf/1809.02161.pdf
"Future Directions for Optimizing Compilers", Nuno P. Lopes, John Regehr
Among other things the paper gives an overview of Solvers, Synthesis, and Superoptimizers, their applicability and challenges to compilers.
As software becomes larger, programming languages become higher-level, and processors continue to fail to be clocked faster, we’ll increasingly require compilers to reduce code bloat, eliminate abstraction penalties, and exploit interesting instruction sets. At the same time, compiler execution time must not increase too much and also compilers should never produce the wrong output. This paper examines the problem of making optimizing compilers faster, less buggy, and more capable of generating high-quality output.
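To make the superoptimizer idea concrete, here's a toy sketch (my own illustration, not from the paper): enumerate tiny expressions over an input and a few constants, and keep the cheapest one that agrees with a reference function on sampled inputs. A real superoptimizer would prove equivalence with an SMT solver instead of testing on samples.

```python
import itertools

# Toy superoptimizer: find the cheapest depth-1 expression that
# matches a reference function on sampled inputs. Costs and ops
# are made up; a real tool would verify equivalence with a solver.

OPS = {            # op name -> (cost, implementation)
    "add": (1, lambda a, b: a + b),
    "mul": (3, lambda a, b: a * b),
    "shl": (1, lambda a, b: a << b),
}

def candidates():
    """Yield (description, cost, function) for depth-1 expressions."""
    atoms = [("x", lambda x: x)] + \
            [(str(c), lambda x, c=c: c) for c in range(4)]
    for (na, fa), (nb, fb) in itertools.product(atoms, repeat=2):
        for op, (cost, f) in OPS.items():
            yield (f"{op}({na},{nb})", cost,
                   lambda x, f=f, fa=fa, fb=fb: f(fa(x), fb(x)))

def superoptimize(reference, samples):
    """Return the cheapest candidate agreeing with `reference` on samples."""
    best = None
    for desc, cost, fn in candidates():
        try:
            if all(fn(x) == reference(x) for x in samples):
                if best is None or cost < best[1]:
                    best = (desc, cost, fn)
        except Exception:
            continue
    return best

# "x * 2" (cost 3 as a multiply) gets rediscovered as a cost-1
# equivalent such as add(x,x) or shl(x,1).
found = superoptimize(lambda x: x * 2, samples=range(16))
```

Scaling this brute force past a handful of instructions is exactly where the solver techniques surveyed in the paper come in.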
👍3
https://driesdepoorter.be/thefollower/
Citizen surveillance. As in, a random weirdo surveilling whoever got unlucky. 😒
Dries Depoorter
The Follower, 2023-2025
Using open cameras and AI to find how an Instagram photo is taken.
Project by Dries Depoorter.
120. Adapting old programs to fit new machines usually means adapting new machines to behave like old ones.
Alan J. Perlis (http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html)
This phrase precisely explains the development of computers over the last 50 years.
https://lemire.me/blog/2019/10/16/benchmarking-is-hard-processors-learn-to-predict-branches/
Dang! CPUs really do learn to predict branches. And learn fast!
If you're trying to benchmark how your code handles "cold" (fresh) data, this will screw your tests up real good.
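A sketch of the pitfall and one way around it (names and sizes are mine; Lemire measures the effect in compiled code, and Python's interpreter overhead mutes it, so the shape of the harness is the point, not the absolute timings):

```python
import random
import timeit

def count_odd(data):
    """The branchy kernel under test: count odd values."""
    n = 0
    for x in data:
        if x & 1:        # the data-dependent branch the CPU predicts
            n += 1
    return n

def bench_recycled(size=10_000, reps=50):
    """Pitfall: one input reused across reps.
    The branch predictor (and caches) learn this exact input,
    so later reps run faster than any 'cold' data ever would."""
    data = [random.getrandbits(30) for _ in range(size)]
    return timeit.timeit(lambda: count_odd(data), number=reps)

def bench_fresh(size=10_000, reps=50):
    """Fairer for cold-data claims: a new random input per repetition,
    so the predictor can't memorize the branch outcomes."""
    inputs = [[random.getrandbits(30) for _ in range(size)]
              for _ in range(reps)]
    it = iter(inputs)
    return timeit.timeit(lambda: count_odd(next(it)), number=reps)
```

Pre-generating the fresh inputs keeps the random-number generation out of the timed region, which is its own classic benchmarking mistake.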
👍2
https://nitter.it/Bertrand_Meyer/status/1575216523689754624
Bertrand Meyer resurrected and published for #free his 1991 #book "Introduction to the Theory of Programming Languages".
I have no idea if it's any good.
Nitter
Bertrand Meyer (@Bertrand_Meyer)
I was able to reconstruct, and make available as a free PDF, my 1991 book "Introduction to the Theory of Programming Languages". Of course it would be written differently today but I think it is still useful as a presentation of program semantics. See ht…
🤔1
https://www.linuxjournal.com/content/sqlite-secrecy-management-tools-and-methods
Well I guess we all are much safer now... 😏
An Informix database was running under HP-UX on the U.S. battleship DDG-79 Oscar Austin, and during ship power losses, the database would not always restart without maintenance, presenting physical risks for the crew. SQLite is an answer to that danger; when used properly, it will transparently recover from such crashes.
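The recovery guarantee rests on SQLite's atomic transactions: an interrupted write is rolled back from the journal on the next open. A power loss can't be reproduced in a snippet, but the same machinery shows up as rollback-on-failure, sketched here with Python's built-in sqlite3 module (the table and values are made up):

```python
import sqlite3

# A failed transaction leaves no trace -- the same atomic-commit
# machinery that lets SQLite recover after a power loss.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.execute("INSERT INTO readings VALUES ('hull', 1.0)")
con.commit()

try:
    with con:  # opens a transaction; rolls back if the block raises
        con.execute("INSERT INTO readings VALUES ('hull', 2.0)")
        raise RuntimeError("simulated mid-transaction failure")
except RuntimeError:
    pass

# Only the committed row survives.
rows = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

"When used properly" is doing real work in the article's claim: the durability guarantees depend on journaling mode and on the filesystem honoring syncs.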
https://sciml.ai/news/2022/09/21/compile_time/
"How Julia ODE Solve Compile Time Was Reduced From 30 Seconds to 0.1"
And a cool side story: "we replaced the OpenBLAS LU factorization with a pure Julia implementation and outperformed Intel MKL thanks to the JuliaSIMD ecosystem".
But mainly the post is a showcase for using SnoopCompile.jl, SnoopPrecompile.jl, FunctionWrappers.jl (and even FunctionWrappersWrappers.jl), plus some sensible user-level refactorings (mainly separating type declarations from function implementations), to drastically improve precompilation speed and efficiency. And then building tight system images with PackageCompiler.jl. 😊
What I like best in this story is the Julia community building tools to address the problems it faces. As the saying goes, "you can't optimize what you can't measure", so Tim Holy builds SnoopCompile.jl. And all of that stays user-level, meaning you can build your own tooling if the existing tools don't cover your needs.
Virtually no patching of the Julia interpreter/compiler itself was needed for this speedup, apart from general ongoing work on precompilation caching that promises an even wider scope in the upcoming Julia 1.9 and benefits all packages regardless. The rest is user space, on both the package-author and user side. Even the changes to the Base library count, since anybody can submit a pull request that improves performance even further.
sciml.ai
Open Source Software for Scientific Machine Learning
🔥2👏2🤯1
In other news. OpenBSD project released a new version control system implemented on top of (and thus fully compatible with) Git repository format: https://gameoftrees.org/
I, for one, have no idea who needs this apart from OpenBSD developers. Windows, naturally, is not supported (though macOS is).
gameoftrees.org
Game of Trees
the main Game of Trees page
https://theconversation.com/shifting-ocean-currents-are-pushing-more-and-more-heat-into-the-southern-hemispheres-cooler-waters-189122
Everyone knows fluid dynamics is a bitch. On a global scale it gets even funnier. But sooner or later the shit will hit the fan for real...
The Conversation
Shifting ocean currents are pushing more and more heat into the Southern Hemisphere’s cooler waters
Our oceans have absorbed almost all the extra heat we’ve trapped with our emissions. Now we know how this heat moves in ocean currents.
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
If you strip away all the nuances, DeepMind found a way to represent matrix multiplication as a single-player game with a score proportional to algorithm efficiency and fed it to AlphaZero, which is notoriously good at games. And indeed, a suitably modified AlphaZero, dubbed AlphaTensor, found new state-of-the-art matrix multiplication algorithms for a wide range of fixed matrix sizes, including ones optimized specifically for GPGPUs and TPUs.
In a broader context this is a huge leap in applying reinforcement learning to algorithms research. Expect a thick stream of papers feeding various kinds of algorithmic problems into more or less the same system.
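For context on what kind of algorithm the game searches for: the classic example is Strassen's 1969 scheme, which multiplies 2×2 matrices with 7 scalar multiplications instead of 8, and AlphaTensor hunts for decompositions of the same kind at larger sizes. A quick sketch (function names are mine; the identities are Strassen's published ones):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    """Reference: the schoolbook 8-multiplication product."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]
```

Applied recursively to block matrices, saving one multiplication per 2×2 step is what drops the asymptotic cost below cubic; AlphaTensor's discoveries shave multiplications off larger base cases in the same way.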
Google DeepMind
Discovering novel algorithms with AlphaTensor
In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental task…