Reddit Programming
I will send you the newest posts from subreddit /r/programming
Webinar on how to build your own programming language in C++ from the developers of a static analyzer
https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/

PVS-Studio presents a series of webinars on how to build your own programming language in C++. In the first session, PVS-Studio will go over what's inside the "black box": in clear and plain terms, they'll explain what a lexer, a parser, a semantic analyzer, and an evaluator are, why they're needed, and how they work. The session is led by Yuri Minaev, C++ architect at PVS-Studio. You're welcome to join (https://pvs-studio.com/en/webinar/23/?utm_source=reddit) submitted by /u/Xadartt (https://www.reddit.com/user/Xadartt)
[link] (https://pvs-studio.com/en/webinar/23/?utm_source=reddit) [comments] (https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/)
PyTorch Now Uses Pyrefly for Type Checking
https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/

From the official PyTorch blog: We're excited to share that PyTorch now leverages Pyrefly to power type checking across our core repository, along with a number of projects in the PyTorch ecosystem: Helion, TorchTitan, and Ignite. For a project the size of PyTorch, typing and type checking have long been essential for ensuring consistency and preventing common bugs that often go unnoticed in dynamic code. Migrating to Pyrefly brings a much-needed upgrade to these development workflows, with lightning-fast, standards-compliant type checking and a modern IDE experience. With Pyrefly, our maintainers and contributors can catch bugs earlier, benefit from consistent results between local and CI runs, and take advantage of advanced typing features. In this blog post, we'll share why we made this transition and highlight the improvements PyTorch has already seen since adopting Pyrefly. Full blog post: https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/ submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/) [comments] (https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/)
Effortless repository-based session history organization for DeepWiki
https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/

When using DeepWiki extensively across multiple OSS repositories, search sessions can quickly pile up, making it hard to keep track of context per repo. To help with this workflow issue, this desktop application wraps DeepWiki in a WebView, tracks URL changes, and groups sessions by repository automatically. Features:
- Displays repositories and their sessions by automatically tracking DeepWiki URL changes
- Right-click context menu for easy deletion of repositories and sessions from the UI
- Renames sessions for clarity
- Checks for updates to notify users when a new version is available
submitted by /u/aqny (https://www.reddit.com/user/aqny)
[link] (https://github.com/ynqa/dwb) [comments] (https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/)
Best way to persist an in-memory cache in Go? (gob vs mmap vs flatbuffers)
https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/

Building a semantic cache in Go. ~10K entries, each a string key + 1.25 KB binary vector + cached value, ~15 MB total. Works great in memory, but every restart means a cold cache. I want fast startup (…). Options I'm weighing:
- encoding/gob: dump to a file on shutdown, load on start. Zero deps, dead simple. Fast enough for 15 MB?
- mmap: memory-map the file; writes hit disk automatically. Fast, but feels like overkill for this size?
- FlatBuffers/protobuf: faster decode than gob, stable wire format. Worth adding a dep?
- SQLite: mature, crash-safe. But is it overkill for a cache?
Anyone have experience with gob at this scale? Is mmap worth the complexity, or am I overthinking a 15 MB file? Other patterns I'm not seeing? submitted by /u/Sweet-Demand-7971 (https://www.reddit.com/user/Sweet-Demand-7971)
[link] (https://www.reddit.com/r/golang/comments/1em9u6t/json_vs_flatbuffers_vs_protocol_buffers/) [comments] (https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/)
The fundamental contradiction of decentralized physical infrastructure
https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/

How do you decentralize something that needs permits, power grids, physical security, and regulatory compliance? Turns out: you mostly don't. https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html submitted by /u/No_Fisherman1212 (https://www.reddit.com/user/No_Fisherman1212)
[link] (https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html) [comments] (https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/)
The programming language coding agents perform best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir.
https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/

I've felt this myself. Moving to a functional architecture gave my codebase the single largest devprod boost. My take is that FP and its patterns enforce:
- A more efficient representation of the actual system, with less accidental complexity
- Clearer human/AI division of labour
- Structural guardrails that replace unreliable discipline

Why? Token efficiency.

One line = perfect context
In FP, a function signature tells you the input type, the output type, and, in strong FP languages, the side effects (monads!). In OOP, side effects are scattered, so the model has to retrieve more context that's more spread out. That's context bloat and cognitive load for the model.

Agents are excellent at mapping patterns
You can think of them as a function: `f(pattern_in, context, constraints) => pattern_out`. They compress training data into a world model, then map between representations. So English to Rust is a piece of cake; novel architecture is not. Therefore, to make the best use of agents, our job becomes defining the high-level patterns. In FP, the functional composition and type signatures ARE the patterns, and it's easier to distinguish the architecture from the lower-level code.

Pushes impurity to the edge
LLMs write pure functions amazingly well: they're easy to test and defined entirely by contiguous text. Impure functions' side effects are harder to test. In my codebase, pure and impure functions are separated into different folders. This way I can direct my attention to only the high-risk changes: I review functional composition (the architecture), edge functions, and test case summaries closely, and ignore pure function bodies.

FP enforces best practices
Purity is the default; you opt INTO side effects. Immutability is the default; you opt INTO mutation. Agents are surprisingly lazy and will use tools however they want. I wrote an MCP tool for agents to create graphs, and it kept creating single nodes. So I blocked calls that created too few nodes, with an option to override if the agent read the instructions and explained why. What did Claude do? It didn't read the instructions and overrode every time with plausible explanations. When I removed the override, the behaviour I wanted was enforced, at the small cost of reduced flexibility. That's the FP philosophy.
Both I and LLMs perform better with FP. I don't think it's about the specifics of the languages but about the emergent architectures FP encourages. I'd love to hear from engineers who have been using coding agents in FP codebases. submitted by /u/manummasson (https://www.reddit.com/user/manummasson)
[link] (https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/) [comments] (https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/)
AI, Entropy, and the Illusion of Convergence in Modern Software
https://www.reddit.com/r/programming/comments/1r8u5kq/ai_entropy_and_the_illusion_of_convergence_in/

Hey everyone!
I just started a blog recently, and last week I finally published my first longer technical blog post. It's about entropy, divergence vs. convergence, and why tests aren't just verification: they're convergence mechanisms. tldr;
-----
AI tools have dramatically reduced the cost of divergence: exploration, variation, and rapid generation of code and tests. In healthy systems, divergence must be followed by convergence: the deliberate effort of collapsing possibilities into contracts that define what must remain true. Tests, reframed this way, are not just checks but convergence mechanisms: they encode commitments the system will actively defend over time. When divergence becomes nearly frictionless and convergence doesn't, systems expand faster than humans can converge them. The result? Tests that mirror incidental implementation details instead of encoding stable intent. Instead of reversing entropy, they amplify it by committing the system to things that were never meant to be stable.
-----
If you're interested, give it a read; I'd appreciate it. If not, let me know what I could do better! I appreciate any feedback and am happy to take part in discussions :) submitted by /u/TranslatorRude4917 (https://www.reddit.com/user/TranslatorRude4917)
[link] (https://www.abelenekes.com/p/when-change-becomes-cheaper-than-commitment) [comments] (https://www.reddit.com/r/programming/comments/1r8u5kq/ai_entropy_and_the_illusion_of_convergence_in/)