Reddit Programming
I will send you the newest posts from the subreddit /r/programming
Peer-reviewed study: AI-generated changes fail more often in unhealthy code (30%+ higher defect risk)
https://www.reddit.com/r/programming/comments/1r70jbb/peerreviewed_study_aigenerated_changes_fail_more/

We recently published research, “Code for Machines, Not Just Humans: Quantifying AI-Friendliness with Code Health Metrics.” In the study, we analyzed AI-generated refactorings across 5,000 real programs using six different LLMs, measuring whether the changes preserved behavior while keeping tests passing. One result stood out: AI-generated changes failed significantly more often in unhealthy code, with defect risk increasing by at least 30%.

Some important nuance: the study only included code with Code Health ≥ 7.0. Truly low-quality legacy modules (scores of 4, 3, or 1) were not included, so the 30% increase was observed in code that was still relatively maintainable. Based on prior Code Health research, breakage rates in deeply unhealthy legacy systems are likely non-linear and could increase steeply.

The paper argues that Code Health is a key factor in whether AI coding assistants accelerate development or amplify defect risk. The traditional maxim says code must be written for humans to read; with AI increasingly modifying code, it may also need to be structured in ways machines can reliably interpret. Our data suggests AI performance is tightly coupled to the structural health of the system it’s applied to:

- Healthy code → AI behaves more predictably
- Unhealthy code → defect rates rise sharply

This mirrors long-standing findings about human defect rates in complex systems. Are you seeing different AI outcomes depending on which parts of the codebase the model touches?

Disclosure: I work at CodeScene (the company behind the study). I’m not one of the authors, but I wanted to share the findings here for discussion. If useful, we’re also hosting a technical session next week to go deeper into the methodology and architectural implications; happy to share details.

submitted by /u/Summer_Flower_7648 (https://www.reddit.com/user/Summer_Flower_7648)
[link] (https://codescene.com/hubfs/whitepapers/AI-Ready-Code-How-Code-Health-Determines-AI-Performance.pdf) [comments] (https://www.reddit.com/r/programming/comments/1r70jbb/peerreviewed_study_aigenerated_changes_fail_more/)
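The study's core check (did the refactoring preserve behavior under the existing tests?) can be sketched in a few lines. This is a hypothetical illustration, not the study's harness; the discount functions, thresholds, and probe inputs are all invented:

```go
package main

import "fmt"

// Before: nested conditionals, a common code-health smell.
func discountBefore(loyal bool, total float64) float64 {
	if loyal {
		if total > 100 {
			return total * 0.9
		}
		return total * 0.95
	}
	return total
}

// After: the same logic flattened into guard clauses, as an
// AI refactoring might produce.
func discountAfter(loyal bool, total float64) float64 {
	if !loyal {
		return total
	}
	if total > 100 {
		return total * 0.9
	}
	return total * 0.95
}

func main() {
	// Behavior-preservation check: the refactoring only "passes"
	// if both versions agree on every probe input.
	probes := []struct {
		loyal bool
		total float64
	}{{true, 150}, {true, 50}, {false, 150}}
	for _, p := range probes {
		a, b := discountBefore(p.loyal, p.total), discountAfter(p.loyal, p.total)
		if a != b {
			panic("refactoring changed behavior")
		}
	}
	fmt.Println("behavior preserved on all probes")
}
```

The study's finding, restated in these terms: the messier `discountBefore` is to start with, the more often the generated `discountAfter` fails this kind of check.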
Webinar on how to build your own programming language in C++ from the developers of a static analyzer
https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/

PVS-Studio presents a series of webinars on how to build your own programming language in C++. In the first session, PVS-Studio will go over what's inside the "black box": in clear and plain terms, they'll explain what a lexer, a parser, a semantic analyzer, and an evaluator are. Yuri Minaev, C++ architect at PVS-Studio, will talk about what these components are, why they're needed, and how they work. You're welcome to join (https://pvs-studio.com/en/webinar/23/?utm_source=reddit)

submitted by /u/Xadartt (https://www.reddit.com/user/Xadartt)
[link] (https://pvs-studio.com/en/webinar/23/?utm_source=reddit) [comments] (https://www.reddit.com/r/programming/comments/1r76yj2/webinar_on_how_to_build_your_own_programming/)
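The pipeline the webinar covers (lexer → parser → semantic analysis → evaluator) can be sketched for a toy language of integer addition. The webinar itself is about C++; this Go sketch just illustrates the stages, and it folds parsing and evaluation together where a real implementation would build an AST first:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode"
)

// Lexer: split source text into tokens (number literals and '+').
func lex(src string) []string {
	var toks []string
	var cur strings.Builder
	flush := func() {
		if cur.Len() > 0 {
			toks = append(toks, cur.String())
			cur.Reset()
		}
	}
	for _, r := range src {
		switch {
		case unicode.IsDigit(r):
			cur.WriteRune(r)
		case r == '+':
			flush()
			toks = append(toks, "+")
		}
	}
	flush()
	return toks
}

// Parser/semantic check + evaluator in one pass: validate each
// number token (the "semantic analysis" of this toy language)
// and fold the stream into a value.
func eval(toks []string) (int, error) {
	sum := 0
	for _, t := range toks {
		if t == "+" {
			continue
		}
		n, err := strconv.Atoi(t)
		if err != nil {
			return 0, fmt.Errorf("not an integer: %q", t)
		}
		sum += n
	}
	return sum, nil
}

func main() {
	toks := lex("1 + 22 + 3")
	v, err := eval(toks)
	if err != nil {
		panic(err)
	}
	fmt.Println(toks, v) // [1 + 22 + 3] 26
}
```

Each stage the webinar names maps onto one of these steps; a real language adds an AST between parsing and evaluation so later phases can work on structure instead of token streams.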
PyTorch Now Uses Pyrefly for Type Checking
https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/

From the official PyTorch blog:

We’re excited to share that PyTorch now leverages Pyrefly to power type checking across our core repository, along with a number of projects in the PyTorch ecosystem: Helion, TorchTitan, and Ignite.

For a project the size of PyTorch, leveraging typing and type checking has long been essential for ensuring consistency and preventing common bugs that often go unnoticed in dynamic code. Migrating to Pyrefly brings a much-needed upgrade to these development workflows, with lightning-fast, standards-compliant type checking and a modern IDE experience. With Pyrefly, our maintainers and contributors can catch bugs earlier, benefit from consistent results between local and CI runs, and take advantage of advanced typing features.

In this blog post, we’ll share why we made this transition and highlight the improvements PyTorch has already experienced since adopting Pyrefly. Full blog post: https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/

submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://pytorch.org/blog/pyrefly-now-type-checks-pytorch/) [comments] (https://www.reddit.com/r/programming/comments/1r777dn/pytorch_now_uses_pyrefly_for_type_checking/)
Effortless repository-based session history organization for DeepWiki
https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/

When using DeepWiki extensively across multiple OSS repositories, search sessions can quickly pile up, making it hard to keep track of context per repo. To help with this workflow issue, this desktop application wraps DeepWiki in a WebView, tracks URL changes, and groups sessions by repository automatically.

Features:
- Displays repositories and their sessions by automatically tracking DeepWiki URL changes
- Right-click context menu for easy deletion of repositories and sessions from the UI; also renames sessions for clarity
- Checks for updates and notifies users when a new version is available

submitted by /u/aqny (https://www.reddit.com/user/aqny)
[link] (https://github.com/ynqa/dwb) [comments] (https://www.reddit.com/r/programming/comments/1r7ahk7/effortless_repositorybased_session_history/)
Best way to persist an in-memory cache in Go? (gob vs mmap vs flatbuffers)
https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/

Building a semantic cache in Go: ~10K entries, each a string key plus a 1.25 KB binary vector plus a cached value, ~15 MB total. It works great in-memory, but every restart means a cold cache. I want fast startup.

Options I'm weighing:
- encoding/gob: dump to a file on shutdown, load on start. Zero deps, dead simple. Fast enough for 15 MB?
- mmap: memory-map the file so writes hit disk automatically. Fast, but it feels like overkill for this size?
- FlatBuffers/protobuf: faster decode than gob, stable wire format. Worth adding a dep?
- SQLite: mature, crash-safe. But is it overkill for a cache?

Anyone have experience with gob at this scale? Is mmap worth the complexity, or am I overthinking a 15 MB file? Other patterns I'm not seeing?

submitted by /u/Sweet-Demand-7971 (https://www.reddit.com/user/Sweet-Demand-7971)
[link] (https://www.reddit.com/r/golang/comments/1em9u6t/json_vs_flatbuffers_vs_protocol_buffers/) [comments] (https://www.reddit.com/r/programming/comments/1r7v9wx/best_way_to_persist_an_inmemory_cache_in_go_gob/)
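For reference, the encoding/gob option from the post is only a few lines. This is a minimal sketch, not the poster's code: the `Entry` struct, file name, and field shapes are assumed from the description (string key, binary vector, cached value), and the temp-file-plus-rename step is one common way to avoid a half-written file on crash:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"os"
)

// Entry is an assumed shape for one cache record.
type Entry struct {
	Vector []float32 // the ~1.25 KB embedding
	Value  string    // the cached result
}

// save encodes the whole cache with gob and writes it atomically:
// write to a temp file, then rename over the target.
func save(path string, cache map[string]Entry) error {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(cache); err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, buf.Bytes(), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

// load decodes the cache back from disk on startup.
func load(path string) (map[string]Entry, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	cache := make(map[string]Entry)
	if err := gob.NewDecoder(f).Decode(&cache); err != nil {
		return nil, err
	}
	return cache, nil
}

func main() {
	cache := map[string]Entry{
		"hello": {Vector: []float32{0.1, 0.2}, Value: "cached answer"},
	}
	if err := save("cache.gob", cache); err != nil {
		panic(err)
	}
	loaded, err := load("cache.gob")
	if err != nil {
		panic(err)
	}
	fmt.Println(loaded["hello"].Value) // cached answer
}
```

At 15 MB a full decode on startup like this is typically a one-time cost of tens of milliseconds, which is the usual argument for trying the zero-dependency option before reaching for mmap or a new wire format.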
The fundamental contradiction of decentralized physical infrastructure
https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/

How do you decentralize something that needs permits, power grids, physical security, and regulatory compliance? Turns out: you mostly don't. https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html

submitted by /u/No_Fisherman1212 (https://www.reddit.com/user/No_Fisherman1212)
[link] (https://cybernews-node.blogspot.com/2026/02/depins-still-more-decentralized-dream.html) [comments] (https://www.reddit.com/r/programming/comments/1r8cobb/the_fundamental_contradiction_of_decentralized/)