Runtime validation in type annotations
https://www.reddit.com/r/programming/comments/1r6zc2r/runtime_validation_in_type_annotations/
submitted by /u/Xadartt (https://www.reddit.com/user/Xadartt)
[link] (https://blog.natfu.be/validation-in-type-annotations/) [comments] (https://www.reddit.com/r/programming/comments/1r6zc2r/runtime_validation_in_type_annotations/)
Peer-reviewed study: AI-generated changes fail more often in unhealthy code (30%+ higher defect risk)
https://www.reddit.com/r/programming/comments/1r70jbb/peerreviewed_study_aigenerated_changes_fail_more/
We recently published research, “Code for Machines, Not Just Humans: Quantifying AI-Friendliness with Code Health Metrics.” In the study, we analyzed AI-generated refactorings across 5,000 real programs using six different LLMs, measuring whether the changes preserved behavior while keeping tests passing. One result stood out: AI-generated changes failed significantly more often in unhealthy code, with defect risk increasing by at least 30%.

Some important nuance: the study only included code with Code Health ≥ 7.0, so truly low-quality legacy modules (scores of 4, 3, or 1) were not included. The 30% increase was observed in code that was still relatively maintainable. Based on prior Code Health research, breakage rates in deeply unhealthy legacy systems are likely non-linear and could increase steeply.

The paper argues that Code Health is a key factor in whether AI coding assistants accelerate development or amplify defect risk. The traditional maxim says code must be written for humans to read; with AI increasingly modifying code, it may also need to be structured in ways machines can reliably interpret. Our data suggests AI performance is tightly coupled to the structural health of the system it’s applied to:

- Healthy code → AI behaves more predictably
- Unhealthy code → defect rates rise sharply

This mirrors long-standing findings about human defect rates in complex systems. Are you seeing different AI outcomes depending on which parts of the codebase the model touches?

Disclosure: I work at CodeScene (the company behind the study). I’m not one of the authors, but I wanted to share the findings here for discussion. If useful, we’re also hosting a technical session next week to go deeper into the methodology and architectural implications; happy to share details.

submitted by /u/Summer_Flower_7648 (https://www.reddit.com/user/Summer_Flower_7648)
[link] (https://codescene.com/hubfs/whitepapers/AI-Ready-Code-How-Code-Health-Determines-AI-Performance.pdf) [comments] (https://www.reddit.com/r/programming/comments/1r70jbb/peerreviewed_study_aigenerated_changes_fail_more/)
How would you design a Distributed Cache for a High-Traffic System?
https://www.reddit.com/r/programming/comments/1r72rzk/how_would_you_design_a_distributed_cache_for_a/
submitted by /u/javinpaul (https://www.reddit.com/user/javinpaul)
[link] (https://javarevisited.substack.com/p/how-would-you-design-a-distributed) [comments] (https://www.reddit.com/r/programming/comments/1r72rzk/how_would_you_design_a_distributed_cache_for_a/)
SOLID in FP: Single Responsibility, or How Pure Functions Solved It Already · cekrem.github.io
https://www.reddit.com/r/programming/comments/1r75tvu/solid_in_fp_single_responsibility_or_how_pure/
submitted by /u/cekrem (https://www.reddit.com/user/cekrem)
[link] (https://cekrem.github.io/posts/solid-in-fp-single-responsibility/) [comments] (https://www.reddit.com/r/programming/comments/1r75tvu/solid_in_fp_single_responsibility_or_how_pure/)
Webinar on how to build your own programming language in C++ from the developers of a static analyzer
https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/
PVS-Studio presents a series of webinars on how to build your own programming language in C++. In the first session, PVS-Studio will look inside the "black box," explaining in clear and plain terms what a lexer, a parser, a semantic analyzer, and an evaluator are. Yuri Minaev, C++ architect at PVS-Studio, will talk about what these components are, why they're needed, and how they work. You're welcome to join (https://pvs-studio.com/en/webinar/23/?utm_source=reddit)

submitted by /u/Xadartt (https://www.reddit.com/user/Xadartt)
[link] (https://pvs-studio.com/en/webinar/23/?utm_source=reddit) [comments] (https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/)
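The lexer → parser → evaluator pipeline the webinar covers can be sketched in a few dozen lines. Here is a minimal, hypothetical Go example (not from the webinar, which uses C++) for integer arithmetic, with the parser and evaluator fused into one recursive-descent pass:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode"
)

// token is what the lexer produces from raw source text.
type token struct {
	kind string // "num" or "op"
	text string
}

// lex is the lexer stage: characters in, tokens out.
func lex(src string) []token {
	var toks []token
	for i := 0; i < len(src); {
		c := rune(src[i])
		switch {
		case unicode.IsSpace(c):
			i++
		case unicode.IsDigit(c):
			j := i
			for j < len(src) && unicode.IsDigit(rune(src[j])) {
				j++
			}
			toks = append(toks, token{"num", src[i:j]})
			i = j
		case strings.ContainsRune("+-*/", c):
			toks = append(toks, token{"op", string(c)})
			i++
		default:
			panic("unexpected character")
		}
	}
	return toks
}

// parser walks the token stream; here parsing and evaluation are
// fused, so each grammar rule returns the value it computes.
type parser struct {
	toks []token
	pos  int
}

func (p *parser) peek() *token {
	if p.pos < len(p.toks) {
		return &p.toks[p.pos]
	}
	return nil
}

// expr handles + and - (lowest precedence).
func (p *parser) expr() int {
	v := p.term()
	for t := p.peek(); t != nil && (t.text == "+" || t.text == "-"); t = p.peek() {
		p.pos++
		r := p.term()
		if t.text == "+" {
			v += r
		} else {
			v -= r
		}
	}
	return v
}

// term handles * and / (higher precedence, so it binds tighter).
func (p *parser) term() int {
	v := p.factor()
	for t := p.peek(); t != nil && (t.text == "*" || t.text == "/"); t = p.peek() {
		p.pos++
		r := p.factor()
		if t.text == "*" {
			v *= r
		} else {
			v /= r
		}
	}
	return v
}

// factor handles number literals.
func (p *parser) factor() int {
	t := p.peek()
	p.pos++
	n, _ := strconv.Atoi(t.text)
	return n
}

func main() {
	p := &parser{toks: lex("1 + 2 * 3")}
	fmt.Println(p.expr()) // prints 7, because * binds tighter than +
}
```

A real language would add a distinct AST and a semantic-analysis pass between parsing and evaluation; this sketch only illustrates how the stages hand data to each other.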
The Servo project and its impact on the web platform ecosystem
https://www.reddit.com/r/programming/comments/1r772gl/the_servo_project_and_its_impact_on_the_web/
submitted by /u/fpcoder (https://www.reddit.com/user/fpcoder)
[link] (https://servo.org/slides/2026-02-fosdem-servo-web-platform/) [comments] (https://www.reddit.com/r/programming/comments/1r772gl/the_servo_project_and_its_impact_on_the_web/)
PyTorch Now Uses Pyrefly for Type Checking
https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/
From the official PyTorch blog:

We’re excited to share that PyTorch now leverages Pyrefly to power type checking across our core repository, along with a number of projects in the PyTorch ecosystem: Helion, TorchTitan, and Ignite. For a project the size of PyTorch, leveraging typing and type checking has long been essential for ensuring consistency and preventing common bugs that often go unnoticed in dynamic code. Migrating to Pyrefly brings a much-needed upgrade to these development workflows, with lightning-fast, standards-compliant type checking and a modern IDE experience. With Pyrefly, our maintainers and contributors can catch bugs earlier, benefit from consistent results between local and CI runs, and take advantage of advanced typing features. In this blog post, we’ll share why we made this transition and highlight the improvements PyTorch has already experienced since adopting Pyrefly.

Full blog post: https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/

submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/) [comments] (https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/)
Effortless repository-based session history organization for DeepWiki
https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/
When using DeepWiki extensively across multiple OSS repositories, search sessions can quickly pile up, making it hard to keep track of context per repo. To help with this workflow issue, this desktop application wraps DeepWiki in a WebView, tracks URL changes, and groups sessions by repository automatically.

Features:
- Displays repositories and their sessions, via automatic tracking of DeepWiki URL changes
- Right-click context menu for easy deletion of repositories and sessions from the UI
- Renames sessions for clarity
- Checks for updates to notify users when a new version is available

submitted by /u/aqny (https://www.reddit.com/user/aqny)
[link] (https://github.com/ynqa/dwb) [comments] (https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/)
The Interest Rate on Your Codebase: A Financial Framework for Technical Debt
https://www.reddit.com/r/programming/comments/1r7cyeg/the_interest_rate_on_your_codebase_a_financial/
submitted by /u/misterchiply (https://www.reddit.com/user/misterchiply)
[link] (https://www.chiply.dev/post-technical-debt) [comments] (https://www.reddit.com/r/programming/comments/1r7cyeg/the_interest_rate_on_your_codebase_a_financial/)
Claude vs Gemini: Which AI Writes Better Code in 2026?
https://www.reddit.com/r/programming/comments/1r7u6xk/claude_vs_gemini_which_ai_writes_better_code_in/
submitted by /u/RevolutionaryHeart24 (https://www.reddit.com/user/RevolutionaryHeart24)
[link] (https://indiascope.in/claude-vs-gemini-code-2026/) [comments] (https://www.reddit.com/r/programming/comments/1r7u6xk/claude_vs_gemini_which_ai_writes_better_code_in/)
Best way to persist an in-memory cache in Go? (gob vs mmap vs flatbuffers)
https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/
Building a semantic cache in Go: ~10K entries, each a string key + 1.25 KB binary vector + cached value, ~15 MB total. Works great in-memory, but every restart means a cold cache. I want fast startup (…).

Options I'm weighing:
- encoding/gob: dump to file on shutdown, load on start. Zero deps, dead simple. Fast enough for 15 MB?
- mmap: memory-map the file, writes hit disk automatically. Fast, but feels like overkill for this size?
- FlatBuffers/protobuf: faster decode than gob, stable wire format. Worth adding a dep?
- SQLite: mature, crash-safe. But is it overkill for a cache?

Anyone have experience with gob at this scale? Is mmap worth the complexity, or am I overthinking a 15 MB file? Other patterns I'm not seeing?

submitted by /u/Sweet-Demand-7971 (https://www.reddit.com/user/Sweet-Demand-7971)
[link] (https://www.reddit.com/r/golang/comments/1em9u6t/json_vs_flatbuffers_vs_protocol_buffers/) [comments] (https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/)
From Cron to Distributed Schedulers: Scaling Job Execution to Thousands of Jobs per Second
https://www.reddit.com/r/programming/comments/1r7xwx8/from_cron_to_distributed_schedulers_scaling_job/
submitted by /u/Local_Ad_6109 (https://www.reddit.com/user/Local_Ad_6109)
[link] (https://animeshgaitonde.medium.com/from-cron-to-distributed-schedulers-scaling-job-execution-to-thousands-of-jobs-per-second-ef05955bf3d9?sk=4446379bce79c4262046f69ef2cbcebb) [comments] (https://www.reddit.com/r/programming/comments/1r7xwx8/from_cron_to_distributed_schedulers_scaling_job/)
Four Column ASCII (2017)
https://www.reddit.com/r/programming/comments/1r7ybw8/four_column_ascii_2017/
submitted by /u/schmul112 (https://www.reddit.com/user/schmul112)
[link] (https://garbagecollected.org/2017/01/31/four-column-ascii/) [comments] (https://www.reddit.com/r/programming/comments/1r7ybw8/four_column_ascii_2017/)
Coding Agents & Language Evolution: Navigating Uncharted Waters • José Valim
https://www.reddit.com/r/programming/comments/1r82lwm/coding_agents_language_evolution_navigating/
submitted by /u/goto-con (https://www.reddit.com/user/goto-con)
[link] (https://youtu.be/VZcDxkFj_9E) [comments] (https://www.reddit.com/r/programming/comments/1r82lwm/coding_agents_language_evolution_navigating/)
Volume Scaling Techniques for Improved Lattice Attacks in Python
https://www.reddit.com/r/programming/comments/1r83brx/volume_scaling_techniques_for_improved_lattice/
submitted by /u/DataBaeBee (https://www.reddit.com/user/DataBaeBee)
[link] (https://leetarxiv.substack.com/p/guessing-bits-improved-lattice-attacks) [comments] (https://www.reddit.com/r/programming/comments/1r83brx/volume_scaling_techniques_for_improved_lattice/)
Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?
https://www.reddit.com/r/programming/comments/1r89c8e/evaluating_agentsmd_are_repositorylevel_context/
submitted by /u/mttd (https://www.reddit.com/user/mttd)
[link] (https://arxiv.org/abs/2602.11988) [comments] (https://www.reddit.com/r/programming/comments/1r89c8e/evaluating_agentsmd_are_repositorylevel_context/)
Fork, Explore, Commit: OS Primitives for Agentic Exploration (PDF)
https://www.reddit.com/r/programming/comments/1r89rbd/fork_explore_commit_os_primitives_for_agentic/
submitted by /u/congwang (https://www.reddit.com/user/congwang)
[link] (https://arxiv.org/abs/2602.08199) [comments] (https://www.reddit.com/r/programming/comments/1r89rbd/fork_explore_commit_os_primitives_for_agentic/)
The fundamental contradiction of decentralized physical infrastructure
https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/
How do you decentralize something that needs permits, power grids, physical security, and regulatory compliance? Turns out: you mostly don't. https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html

submitted by /u/No_Fisherman1212 (https://www.reddit.com/user/No_Fisherman1212)
[link] (https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html) [comments] (https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/)
Oral History of Michael J. Flynn
https://www.reddit.com/r/programming/comments/1r8i768/oral_history_of_michael_j_flynn/
submitted by /u/mttd (https://www.reddit.com/user/mttd)
[link] (https://www.youtube.com/watch?v=OD2uE9X9BPs) [comments] (https://www.reddit.com/r/programming/comments/1r8i768/oral_history_of_michael_j_flynn/)
The programming language coding agents perform best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir.
https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/
I've felt this myself. Moving to a functional architecture gave my codebase the single largest devprod boost. My take is that FP and its patterns enforce:
- A more efficient representation of the actual system, with less accidental complexity
- Clearer human/AI division of labour
- Structural guardrails that replace unreliable discipline

Why?

Token efficiency: one line = perfect context. In FP, a function signature tells you the input type, the output type, and in strong FP languages, the side effects (monads!). In OOP, side effects are scattered, so the model has to retrieve more context that's more spread out. That's context bloat and cognitive load for the model.

Agents are excellent at mapping patterns. You can think of them as a function: `f(pattern_in, context, constraints) => pattern_out`. They compress training data into a world model, then map between representations. So English to Rust is a piece of cake; not so with novel architecture. Therefore, to make the best use of agents, our job becomes defining the high-level patterns. In FP, the functional composition and type signatures ARE the patterns. It's easier to distinguish the architecture from the lower-level code.

FP pushes impurity to the edge. LLMs write pure functions amazingly well: they're easy to test and defined entirely by contiguous text. Impure functions' side effects are harder to test. In my codebase, pure and impure functions are separated into different folders. This way I can direct my attention to only the high-risk changes: I review functional composition (the architecture), edge functions, and test case summaries closely, and ignore pure function bodies.

FP enforces best practices. Purity is the default; you opt INTO side effects. Immutability is the default; you opt INTO mutation. Agents are surprisingly lazy and will use tools however they want. I wrote an MCP tool for agents to create graphs, and it kept creating single nodes. So I blocked it if node length was too long, but with an option to override if it read the instructions and explained why. What did Claude do? It didn't read the instructions and overrode every time with plausible explanations. When I removed the override ability, the behaviour I wanted was enforced, with the small tradeoff of reduced flexibility. FP philosophy.

Both myself and LLMs perform better with FP. I don't think it's about the specifics of the languages but the emergent architectures it encourages. Would love to hear from engineers who have been using coding agents in FP codebases.

submitted by /u/manummasson (https://www.reddit.com/user/manummasson)
[link] (https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/) [comments] (https://www.reddit.com/r/programming/comments/1r8nbtz/the_programming_language_coding_agents_perform/)
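The pure-core / impure-edge split the post describes can be sketched even outside an FP language. A minimal, hypothetical Go example (Go rather than Elixir, purely for illustration): the pure function's signature is its entire contract and can be tested without any setup, while all I/O is confined to a single entry point at the edge:

```go
package main

import (
	"fmt"
	"strings"
)

// --- pure core: deterministic, no I/O, fully described by its signature ---

// normalize trims, drops blanks, and lowercases. Same input always
// yields the same output, so it can be reviewed and tested in isolation.
func normalize(lines []string) []string {
	out := make([]string, 0, len(lines))
	for _, l := range lines {
		l = strings.TrimSpace(l)
		if l != "" {
			out = append(out, strings.ToLower(l))
		}
	}
	return out
}

// --- impure edge: the only place side effects happen ---

func main() {
	// I/O (printing) lives here at the edge; everything it calls is pure.
	fmt.Println(normalize([]string{"  Hello ", "", "WORLD"}))
}
```

Under this layout, a reviewer (or agent) auditing changes only needs to scrutinize the edge and the composition; the pure core is verified by its tests alone, which is exactly the attention-budgeting the post argues for.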