Reddit Programming
I will send you the newest posts from the subreddit /r/programming
The fundamental contradiction of decentralized physical infrastructure
https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/

How do you decentralize something that needs permits, power grids, physical security, and regulatory compliance? Turns out: you mostly don't. https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html submitted by /u/No_Fisherman1212 (https://www.reddit.com/user/No_Fisherman1212)
[link] (https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html) [comments] (https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/)
The programming language coding agents perform best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir.
https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/

I've felt this myself. Moving to a functional architecture gave my codebase its single largest devprod boost. My take is that FP and its patterns enforce:
- A more efficient representation of the actual system, with less accidental complexity
- Clearer human/AI division of labour
- Structural guardrails that replace unreliable discipline

Why? Token efficiency: one line = perfect context. In FP, a function signature tells you the input type, the output type, and, in strongly typed FP languages, the side effects (monads!). In OOP, side effects are scattered, so the model has to retrieve more context that's more spread out. That's context bloat and cognitive load for the model.

Agents are excellent at mapping patterns. You can think of them as a function: `f(pattern_in, context, constraints) => pattern_out`. They compress training data into a world model, then map between representations. So English to Rust is a piece of cake; novel architecture is not. Therefore, to make the best use of agents, our job becomes defining the high-level patterns. In FP, the functional composition and type signatures ARE the patterns, so it's easier to distinguish the architecture from the lower-level code.

FP pushes impurity to the edge. LLMs write pure functions amazingly well: they're easy to test and defined entirely by contiguous text. Impure functions' side effects are harder to test. In my codebase, pure and impure functions are separated into different folders. This way I can direct my attention to only the high-risk changes: I closely review functional composition (the architecture), edge functions, and test-case summaries, and ignore pure function bodies.

FP enforces best practices. Purity is the default; you opt INTO side effects. Immutability is the default; you opt INTO mutation. Agents are surprisingly lazy and will use tools however they want. I wrote an MCP tool for agents to create graphs, and it kept creating single nodes. So I blocked calls that created graphs that were too small, with an option to override if the agent read the instructions and explained why. What did Claude do? It didn't read the instructions and overrode every time with plausible explanations. When I removed the override, the behaviour I wanted was enforced, with the small tradeoff of reduced flexibility. FP philosophy.

LLMs and I both perform better with FP. I don't think it's about the specifics of the languages but about the emergent architectures they encourage. I'd love to hear from engineers who have been using coding agents in FP codebases. submitted by /u/manummasson (https://www.reddit.com/user/manummasson)
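The pure-core / impure-edge split the post describes can be sketched in Python (illustrative only: the post is about Elixir and typed FP languages, and all names here are invented):

```python
# Sketch of the "pure core / impure edge" pattern: the pricing logic is
# pure, while the side effect (charging) is injected at the boundary.

def apply_discount(price_cents: int, percent: int) -> int:
    """Pure: the output depends only on the inputs, so it's trivial to test."""
    return price_cents - price_cents * percent // 100

def checkout(price_cents: int, percent: int, charge) -> int:
    """Impure edge: the side effect is passed in, keeping the pure
    pricing logic reviewable and testable in isolation."""
    total = apply_discount(price_cents, percent)
    charge(total)  # the only effectful call, confined to the edge
    return total

charges = []
total = checkout(10_000, 20, charges.append)
print(total, charges)  # 8000 [8000]
```

In review terms, only `checkout` (the edge) needs close attention; `apply_discount` can be verified by its signature and tests alone, which is the division of labour the post argues for.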
[link] (https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/) [comments] (https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/)
AI, Entropy, and the Illusion of Convergence in Modern Software
https://www.reddit.com/r/programming/comments/1r8u5kq/ai_entropy_and_the_illusion_of_convergence_in/

Hey everyone!
I just started a blog recently, and last week I finally published my first longer technical blog post. It's about entropy, divergence vs. convergence, and why tests aren't just verification - they're convergence mechanisms. tldr;
-----
AI tools have dramatically reduced the cost of divergence: exploration, variation, and rapid generation of code and tests. In healthy systems, divergence must be followed by convergence, the deliberate effort of collapsing possibilities into contracts that define what must remain true. Tests, reframed this way, are not just checks but convergence mechanisms: they encode commitments the system will actively defend over time. When divergence becomes nearly frictionless and convergence doesn’t, systems expand faster than humans can converge them. The result? Tests that mirror incidental implementation detail instead of encoding stable intent. Instead of reversing entropy, they amplify it by committing the system to things that were never meant to be stable.
----- If you're interested, give it a read, I'd appreciate it.
If not, maybe let me know what I could do better! I appreciate any feedback and am happy to partake in discussions :) submitted by /u/TranslatorRude4917 (https://www.reddit.com/user/TranslatorRude4917)
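The tldr's distinction between tests that mirror incidental implementation detail and tests that encode stable intent could look like this (a hypothetical Python sketch; the function and both tests are invented for illustration):

```python
# A small normalizer whose *intent* is: dedupe tags, ignoring case and
# whitespace. Returning a sorted list is an incidental implementation choice.

def normalize(tags):
    return sorted({t.strip().lower() for t in tags})

def test_mirrors_implementation():
    # Brittle: commits to ordering and the concrete list type,
    # neither of which was ever promised.
    assert normalize([" A", "b", "a"]) == ["a", "b"]

def test_encodes_contract():
    # Convergent: encodes only the stable intent
    # (deduplication, case- and whitespace-insensitivity).
    assert set(normalize([" A", "b", "a"])) == {"a", "b"}

test_mirrors_implementation()
test_encodes_contract()
```

The first test passes today but would break under a harmless refactor (say, returning a set), which is exactly the "committing the system to things that were never meant to be stable" failure mode the post describes.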
[link] (https://www.abelenekes.com/p/when-change-becomes-cheaper-than-commitment) [comments] (https://www.reddit.com/r/programming/comments/1r8u5kq/ai_entropy_and_the_illusion_of_convergence_in/)
MySQL and PostgreSQL: different approaches to solve the same problem
https://www.reddit.com/r/programming/comments/1r90loc/mysql_and_postgresql_different_approaches_to/

Both DBs solve the same problem: how to most effectively store and provide access to data in an ACID-compliant way? ACID compliance can be implemented in various ways, and SQL databases vary quite substantially in how they go about it. MySQL in particular, with the default InnoDB engine, takes a completely different approach from Postgres. Both implementations have their own tradeoffs, advantages, and disadvantages.

In theory, the MySQL (InnoDB) approach should have an edge for:
- partial updates of tables with many indexes - only the indexes of changed columns have to be modified, not all of them
- querying tables by the primary key - the index is the table, so it should be as fast as it gets, since data is read from a single place
- previous row versions are stored in a separate space on disk, so active transactions are less affected by potentially large older row versions

Postgres advantages are:
- uniform search performance for all indexes - there is no primary/secondary index distinction; performance is the same for all of them
- a smaller penalty for random inserts, because tables are stored on a heap, in random order, in contrast with MySQL's sorted Clustered Index (table)
- previously started transactions have better access to prior row versions, since they are stored in the same disk space
- less need for locking (virtually none) to support more demanding isolation levels and concurrent access - previous row versions live in the same disk space and can be considered or discarded based on special columns (mostly xmin and xmax)

In theory, theory and practice are the same. But let's see how it is in practice! submitted by /u/BinaryIgor (https://www.reddit.com/user/BinaryIgor)
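The xmin/xmax visibility rule mentioned in the last point can be sketched as a toy Python model (heavily simplified: real Postgres also consults the commit log, snapshot xip lists, and hint bits; this only models the two system columns):

```python
# Toy model of Postgres-style MVCC visibility: every UPDATE leaves the old
# row version in place, stamped with xmax, and writes a new version with
# a fresh xmin. Readers filter versions against their snapshot.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RowVersion:
    value: str
    xmin: int                   # txid that created this version
    xmax: Optional[int] = None  # txid that deleted/updated it, if any

def visible(row: RowVersion, snapshot_xid: int) -> bool:
    """Visible if created by an earlier transaction and not yet
    deleted as of this snapshot (assuming all txids committed)."""
    created = row.xmin <= snapshot_xid
    deleted = row.xmax is not None and row.xmax <= snapshot_xid
    return created and not deleted

# Transaction 5 updated the row: the old version gets xmax=5,
# and a new version is written with xmin=5.
versions = [RowVersion("old", xmin=1, xmax=5), RowVersion("new", xmin=5)]

print([v.value for v in versions if visible(v, snapshot_xid=3)])  # ['old']
print([v.value for v in versions if visible(v, snapshot_xid=7)])  # ['new']
```

Because both versions sit in the same heap, an old snapshot keeps reading "old" without any locking, which is the Postgres advantage the post lists; the tradeoff is that dead versions accumulate until vacuumed.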
[link] (https://binaryigor.com/mysql-and-postgresql-different-approaches.html) [comments] (https://www.reddit.com/r/programming/comments/1r90loc/mysql_and_postgresql_different_approaches_to/)
Lessons learned building a cross-language plot capture engine in R & Python
https://www.reddit.com/r/programming/comments/1r93zyo/lessons_learned_building_a_crosslanguage_plot/

I spent a lot of time trying to build a "zero-config" plot capture system for both R and Python. It turns out the two languages have fundamentally different philosophies on how pixels get to the screen, which makes this easy in Python and super hard in R. I wrote a deep dive comparing the display architectures in both languages, including some admittedly hacky ways to find figure objects through stack inspection. Hope it helps someone avoid our mistakes! submitted by /u/mpacula (https://www.reddit.com/user/mpacula)
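The "find figure objects through stack inspection" hack can be illustrated generically (a hypothetical Python sketch using a stand-in class rather than a real matplotlib Figure; the post's actual implementation may differ):

```python
# Walk caller frames and collect objects of a target type from their
# local variables - the kind of hack a zero-config capture engine might
# use to locate figures the user never explicitly handed over.

import inspect

class Figure:  # stand-in for something like matplotlib.figure.Figure
    def __init__(self, name):
        self.name = name

def find_in_stack(target_type):
    """Scan every caller frame's locals for instances of target_type."""
    found = []
    for frame_info in inspect.stack()[1:]:  # skip this function's own frame
        for value in frame_info.frame.f_locals.values():
            if isinstance(value, target_type):
                found.append(value)
    return found

def plotting_code():
    fig = Figure("scatter")  # a local the capture engine wants to discover
    return [f.name for f in find_in_stack(Figure)]

print(plotting_code())  # ['scatter']
```

This is exactly the kind of fragile-but-effective trick the post hedges on: it depends on frame internals and naming nothing, and it can pick up duplicates when the same object is visible from several frames.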
[link] (https://quickanalysis.substack.com/p/capturing-plots-in-r-and-python-a) [comments] (https://www.reddit.com/r/programming/comments/1r93zyo/lessons_learned_building_a_crosslanguage_plot/)
Most Developers Don’t Build New Things
https://www.reddit.com/r/programming/comments/1r94m2g/most_developers_dont_build_new_things/

I wrote this after noticing how much framework discussion focuses on greenfield work. In practice, most teams I see are inside 10- or 12-year-old systems, evolving them under real constraints. The piece is about that "second act" of software. After launch. After early growth. When reliability and discipline matter more than novelty. Curious how others here think about this. submitted by /u/robbyrussell (https://www.reddit.com/user/robbyrussell)
[link] (https://robbyonrails.com/articles/2026/02/18/most-developers-dont-build-new-things/) [comments] (https://www.reddit.com/r/programming/comments/1r94m2g/most_developers_dont_build_new_things/)
AWS suffered ‘at least two outages’ caused by AI tools, and now I’m convinced we’re living inside a ‘Silicon Valley’ episode
https://www.reddit.com/r/programming/comments/1r9xd58/aws_suffered_at_least_two_outages_caused_by_ai/

"The most efficient way to get rid of all the bugs was to get rid of all the software, which is technically and statistically correct." submitted by /u/squishygorilla (https://www.reddit.com/user/squishygorilla)
[link] (https://www.tomsguide.com/computing/aws-suffered-at-least-two-outages-caused-by-ai-tools-and-now-im-convinced-were-living-inside-a-silicon-valley-episode) [comments] (https://www.reddit.com/r/programming/comments/1r9xd58/aws_suffered_at_least_two_outages_caused_by_ai/)