AlexTCH
Something about programming, something about Computer Science and Data Science, and a bit of coffee. Plus all sorts of nonsense instead of Twitter. :)
3blue1brown! 😂
https://www.arxiv-vanity.com/
> arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF.

WOW! 😳
https://buttondown.email/hillelwayne/archive/why-you-should-read-data-and-reality/

Once more: we are not modeling reality, but the way information about reality is processed, by people. — Bill Kent

This phrase pinpoints why OOP has failed in practice, along with roughly every other "programming methodology" or design methodology.

Reality is more varied than any model. That's the first thing we forget, and it's also what boomerangs back to smack us on the back of the head at the worst possible moment.

The link to the book is inside the post.
#free #book #modeling
https://arxiv.org/pdf/1809.02161.pdf
"Future Directions for Optimizing Compilers", Nuno P. Lopes, John Regehr
As software becomes larger, programming languages become higher-level, and processors continue to fail to be clocked faster, we’ll increasingly require compilers to reduce code bloat, eliminate abstraction penalties, and exploit interesting instruction sets. At the same time, compiler execution time must not increase too much and also compilers should never produce the wrong output. This paper examines the problem of making optimizing compilers faster, less buggy, and more capable of generating high-quality output.

Among other things, the paper surveys solvers, synthesis, and superoptimizers, their applicability to compilers, and the challenges involved.
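To make the superoptimizer idea concrete, here's a toy brute-force sketch of my own (not from the paper): it enumerates short instruction sequences over a made-up 8-bit mini-ISA, shortest first, and verifies each candidate exhaustively against a target function. Real superoptimizers replace the exhaustive check with SMT-solver queries.

```python
import itertools

MASK = 0xFF  # model an 8-bit machine word

def target(x):
    # The function we want a cheaper instruction sequence for.
    return (x * 2) & MASK

# A toy instruction set: each instruction applies an op with a small
# constant to the current value (names and ops here are made up).
OPS = {
    "add": lambda v, c: (v + c) & MASK,
    "shl": lambda v, c: (v << c) & MASK,
    "xor": lambda v, c: (v ^ c) & MASK,
    "and": lambda v, c: v & c,
}
CONSTS = range(4)

def run(program, x):
    v = x
    for op, c in program:
        v = OPS[op](v, c)
    return v

def superoptimize(max_len=2):
    # Enumerate programs shortest-first; verify each candidate
    # over every 8-bit input before accepting it.
    for length in range(1, max_len + 1):
        for program in itertools.product(
                itertools.product(OPS, CONSTS), repeat=length):
            if all(run(program, x) == target(x) for x in range(256)):
                return program
    return None

print(superoptimize())  # finds (("shl", 1),): x*2 is a single shift
```

The search is exponential in program length, which is exactly why the paper's solver-based approaches matter at scale.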
Sick leave is like a really, really, really sketchy vacation...
Simplifin-C
120. Adapting old programs to fit new machines usually means adapting new machines to behave like old ones.

Alan J. Perlis (http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html)

This phrase pinpoints the evolution of computers over the last 50 years.
My occupation? Git-commit juggler.
And I couldn't care less: I'm a Grade B meat product...
Is it just me, or do shell-game con artists resemble people with OCD? 🤔
https://lemire.me/blog/2019/10/16/benchmarking-is-hard-processors-learn-to-predict-branches/

Dang! CPUs really do learn to predict branches. And they learn fast!
If you're trying to benchmark how your code handles "cold" (fresh) data, this will skew your results badly.
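A sketch of the two benchmarking protocols (my illustration, not Lemire's code; note that branch-predictor effects show up clearly in compiled code, while in pure Python the interpreter overhead dominates, so this mostly demonstrates the methodology):

```python
import random
import timeit

def count_odd(values):
    # A data-dependent branch: taken or not, per element.
    n = 0
    for v in values:
        if v & 1:
            n += 1
    return n

data = [random.randrange(1000) for _ in range(10_000)]

# Flawed protocol: reuse one input, so the branch predictor can
# memorize its branch pattern across iterations.
hot = timeit.timeit(lambda: count_odd(data), number=50)

# Sounder protocol: feed fresh data on every iteration (the cost of
# generating it should be measured separately and subtracted).
def cold_run():
    fresh = [random.randrange(1000) for _ in range(10_000)]
    return count_odd(fresh)

cold = timeit.timeit(cold_run, number=50)
print(f"same data: {hot:.3f}s, fresh data: {cold:.3f}s")
```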
— Fear kills the brain!
— Kills the brain?! Oh, just what I need!!!
(jumps with a parachute)
— Screw you.
— Oh, a period at the end of a sentence. How rude!
https://www.linuxjournal.com/content/sqlite-secrecy-management-tools-and-methods
An Informix database was running under HP-UX on the U.S. battleship DDG-79 Oscar Austin, and during ship power losses, the database would not always restart without maintenance, presenting physical risks for the crew. SQLite is an answer to that danger; when used properly, it will transparently recover from such crashes.

Well, I guess we're all much safer now... 😏
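A quick sketch of what "transparently recover" means in practice, using Python's stdlib sqlite3 module (a hypothetical example; a real crash test would kill the process mid-write, but closing the connection without committing exercises the same rollback path):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "power_test.db")

con = sqlite3.connect(path)
con.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
con.execute("INSERT INTO readings (value) VALUES (42.0)")
con.commit()  # committed: durable from here on

# Stand-in for a power loss mid-write: start a transaction, never commit.
con.execute("INSERT INTO readings (value) VALUES (13.0)")
con.close()  # uncommitted work is rolled back, as after a crash

con = sqlite3.connect(path)
rows = con.execute("SELECT value FROM readings").fetchall()
print(rows)  # [(42.0,)]: only the committed row survived
con.close()
```

Everything committed survives, everything in-flight disappears, and the database never ends up in a half-written state.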
https://sciml.ai/news/2022/09/21/compile_time/
"How Julia ODE Solve Compile Time Was Reduced From 30 Seconds to 0.1"

And a cool side story: "we replaced the OpenBLAS LU factorization with a pure Julia implementation and outperformed Intel MKL thanks to the JuliaSIMD ecosystem".

But mainly the post is a showcase for using SnoopCompile.jl, SnoopPrecompile.jl, FunctionWrappers.jl (and even FunctionWrappersWrappers.jl) plus some sensible user-level refactorings (separating type declarations from function implementations mainly) to drastically improve precompilation speed and efficiency. And then building tight System Images with PackageCompiler.jl. 😊

What I like best in this story is the Julia community building tools to address the problems it faces. As the saying goes, "you can't optimize what you can't measure", so Tim Holy builds SnoopCompile.jl. And all of that stays user-level, meaning you can build your own tooling if the existing tools don't cover your needs.

Virtually no patching of the Julia interpreter/compiler itself was needed for this speedup, apart from the general ongoing work on precompilation caching, which promises an even wider scope in the upcoming Julia 1.9 and benefits all packages regardless. The rest is user-space, on both the package author's and the user's side. Even changes to the Base library count, since anybody can submit a Pull Request that improves performance even further.
In other news. OpenBSD project released a new version control system implemented on top of (and thus fully compatible with) Git repository format: https://gameoftrees.org/

I for one have no idea who needs this apart from OpenBSD developers. Windows, naturally, is not supported (though macOS is).
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor

If you strip all the nuances DeepMind found a way to represent matrix multiplication as a single-player game with scores proportional to algorithm efficiency and fed it into AlphaZero, which is notoriously good at games. And indeed properly modified AlphaZero dubbed AlphaTensor found new State-of-the-Art matrix multiplication algorithms for a wide range of fixed matrix sizes, including ones optimized for GPGPUs and TPUs specifically.

In a broader context this is indeed a huge leap in applying Reinforcement Learning to algorithms research. Expect a thick stream of papers feeding various kinds of algorithmic problems into more or less the same system.
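For a sense of what AlphaTensor's "score" counts: matrix-multiplication algorithms are ranked by their number of scalar multiplications. Strassen's classic 2x2 scheme, the kind of record AlphaTensor improves on for specific sizes, uses 7 multiplications instead of the naive 8 (standard textbook formulas, sketched here for illustration; this is not AlphaTensor's output):

```python
def strassen_2x2(A, B):
    # Seven products M1..M7 instead of the naive eight.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))
# ((19, 22), (43, 50)), same as the naive product
```

Applied recursively to block matrices, saving one multiplication per 2x2 step is what drops the asymptotic exponent below 3; AlphaTensor plays the same game for larger fixed sizes and specific hardware.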