Reddit Programming
211 subscribers
1.22K photos
125K links
I will send you the newest posts from subreddit /r/programming
The Cost of Certainty: Why Perfect is the Enemy of Scale in Distributed Systems
https://www.reddit.com/r/programming/comments/1qokaan/the_cost_of_certainty_why_perfect_is_the_enemy_of/

Even in 2026, no AI can negotiate with the speed of light. ⚛️ As an architect, I've realized our biggest expense isn't compute; it's the Certainty Tax. We pay a massive premium to pretend the world isn't chaotic, but production is pure entropy. I just wrote a deep dive on why we need to stop chasing 100% consistency at scale. Using Pokémon GO as a sandbox, I audited:
- The Math: why adding a sidecar can cost you 22 hours of sleep a year.
- The Sandbox: why catch history can lie, but player trading must be painfully slow.
- The Law: how Little's Law proves that patience in a concurrent system is a liability.
If you've ever wrestled with PACELC or consensus algorithms, I'd love to hear your thoughts on where you choose to relax your constraints.
submitted by /u/Level-Sink3315 (https://www.reddit.com/user/Level-Sink3315)
[link] (https://open.substack.com/pub/qianarthurwang/p/the-cost-of-certainty-why-perfect?r=6wytu0) [comments] (https://www.reddit.com/r/programming/comments/1qokaan/the_cost_of_certainty_why_perfect_is_the_enemy_of/)
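The Little's Law claim above can be made concrete with a few lines of arithmetic. The law states L = λW: the average number of requests in flight equals the arrival rate times the average time each request spends in the system. The service numbers below are hypothetical, chosen only to illustrate why extra waiting (e.g. a blocking consensus round) directly inflates concurrency:

```python
# Little's Law: L = lambda * W
#   L      = average number of requests in flight
#   lambda = arrival rate (requests/second)
#   W      = average time a request spends in the system (seconds)

def in_flight(arrival_rate: float, wait_seconds: float) -> float:
    """Average concurrency implied by Little's Law."""
    return arrival_rate * wait_seconds

# Hypothetical service: 200 req/s at 250 ms mean latency.
print(in_flight(200, 0.250))  # 50 requests held in memory at once

# "Patience is a liability": if synchronous coordination doubles W,
# the system must hold twice as many requests at the same load.
print(in_flight(200, 0.500))  # 100
```

Every in-flight request pins memory, connections, and threads, which is one way a "certainty tax" shows up as a capacity bill.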
4 Pyrefly Type Narrowing Patterns that make Python Type Checking more Intuitive
https://www.reddit.com/r/programming/comments/1qolknv/4_pyrefly_type_narrowing_patterns_that_make/

Since Python is a duck-typed language, programs often narrow types by checking a structural property of something rather than just its class name. For a type checker, understanding a wide variety of narrowing patterns is essential for making it as easy as possible for users to type check their code and reducing the number of changes made purely to "satisfy the type checker". In this blog post, we'll go over some cool forms of narrowing that Pyrefly supports, which allow it to understand common code patterns in Python. To the best of our knowledge, Pyrefly is the only type checker for Python that supports all of these patterns.
Contents:
1. hasattr/getattr
2. tagged unions
3. tuple length checks
4. saving conditions in variables
Blog post: https://pyrefly.org/blog/type-narrowing/
Github: https://github.com/facebook/pyrefly
submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://pyrefly.org/blog/type-narrowing/) [comments] (https://www.reddit.com/r/programming/comments/1qolknv/4_pyrefly_type_narrowing_patterns_that_make/)
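To make the four patterns concrete, here is a sketch of what each looks like in ordinary Python. The functions and names are invented for illustration (they are not from the Pyrefly post); the point is that each branch is one a narrowing-aware checker can understand without casts:

```python
from typing import Literal, TypedDict, Union

# 1. hasattr narrowing: a structural check instead of isinstance.
def label(obj: object) -> str:
    if hasattr(obj, "name"):
        return obj.name          # narrowed: obj is known to have `name`
    return repr(obj)

# 2. Tagged unions: discriminate on a Literal field.
class Circle(TypedDict):
    kind: Literal["circle"]
    radius: float

class Square(TypedDict):
    kind: Literal["square"]
    side: float

def area(shape: Union[Circle, Square]) -> float:
    if shape["kind"] == "circle":
        return 3.14159 * shape["radius"] ** 2  # narrowed to Circle
    return shape["side"] ** 2                  # narrowed to Square

# 3. Tuple length checks.
def total(t: Union[tuple[int], tuple[int, int]]) -> int:
    if len(t) == 2:
        return t[0] + t[1]       # narrowed to the 2-tuple
    return t[0]

# 4. Saving a condition in a variable before branching.
def double(x: Union[int, None]) -> int:
    is_set = x is not None
    if is_set:
        return x * 2             # narrowing flows through `is_set`
    return 0
```

All four run fine at runtime regardless of the checker; the difference is whether the tool can prove the narrowed branches safe without `cast` or `# type: ignore`.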
The Age of Pump and Dump Software
https://www.reddit.com/r/programming/comments/1qon4yu/the_age_of_pump_and_dump_software/

A worrying new amalgamation of crypto scams and vibe coding emerges from the bowels of the internet in 2026.
submitted by /u/Gil_berth (https://www.reddit.com/user/Gil_berth)
[link] (https://tautvilas.medium.com/software-pump-and-dump-c8a9a73d313b) [comments] (https://www.reddit.com/r/programming/comments/1qon4yu/the_age_of_pump_and_dump_software/)
Panoptic Segmentation using Detectron2
https://www.reddit.com/r/programming/comments/1qopi7p/panoptic_segmentation_using_detectron2/

For anyone studying Panoptic Segmentation using Detectron2, this tutorial walks through how panoptic segmentation combines instance segmentation (separating individual objects) and semantic segmentation (labeling background regions), so you get a complete pixel-level understanding of a scene. It uses Detectron2's pretrained COCO panoptic model from the Model Zoo, then shows the full inference workflow in Python: reading an image with OpenCV, resizing it for faster processing, loading the panoptic configuration and weights, running prediction, and visualizing the merged "things and stuff" output.
Video explanation: https://youtu.be/MuzNooUNZSY
Medium version for readers who prefer Medium: https://medium.com/image-segmentation-tutorials/detectron2-panoptic-segmentation-made-easy-for-beginners-9f56319bb6cc
Written explanation with code: https://eranfeit.net/detectron2-panoptic-segmentation-made-easy-for-beginners/
This content is shared for educational purposes only, and constructive feedback or discussion is welcome.
Eran Feit
submitted by /u/Feitgemel (https://www.reddit.com/user/Feitgemel)
[link] (https://eranfeit.net/detectron2-panoptic-segmentation-made-easy-for-beginners/) [comments] (https://www.reddit.com/r/programming/comments/1qopi7p/panoptic_segmentation_using_detectron2/)
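The tutorial itself runs Detectron2's predictor, but the core "things and stuff" merge it visualizes can be sketched without the framework. The toy arrays and the 1000x label divisor below are illustrative (the divisor is a common convention in panoptic tooling, not necessarily what this tutorial's code uses): every pixel gets one id, "stuff" classes share an id, and each "thing" instance gets its own:

```python
import numpy as np

# Toy 4x4 scene. Semantic map labels every pixel with a class
# (0 = sky, a "stuff" class; 1 = person, a "thing" class).
semantic = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
])

# Instance map separates individual "things" (0 = no instance).
# Two people: instance 1 in the middle, instance 2 bottom-left.
instance = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
])

LABEL_DIVISOR = 1000  # panoptic_id = class_id * divisor + instance_id

panoptic = semantic * LABEL_DIVISOR + instance

# "Stuff" pixels collapse to one id per class; each person is unique.
print(np.unique(panoptic))  # [   0 1001 1002]
```

This is why panoptic output is "complete": unlike instance segmentation alone, background pixels are not left unlabeled, and unlike semantic segmentation alone, the two people remain distinguishable.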
JSON vs XML in Embedded Linux - system design trade-offs
https://www.reddit.com/r/programming/comments/1qp07z1/json_vs_xml_in_embedded_linux_system_design/

Data formats define whether systems are lean or bloated. I explored how JSON and XML flow through embedded Linux:
- Hardware → Driver → Kernel → Middleware → Application
- Real code examples (I²C, sysfs, cJSON, libxml2)
- Debugging strategies at every layer
- Performance insights: JSON vs XML
Curious how others here approach data structuring in embedded or system-level projects: JSON, XML, or custom formats?
submitted by /u/ImaginaryPin1768 (https://www.reddit.com/user/ImaginaryPin1768)
[link] (https://codewafer.com/blogs/json-xml-in-embedded-linux-full-stack-guide-with-drivers-middleware/) [comments] (https://www.reddit.com/r/programming/comments/1qp07z1/json_vs_xml_in_embedded_linux_system_design/)
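The article's examples use cJSON and libxml2 in C; the same trade-off can be sketched with standard-library parsers in a few lines of Python. The sensor payload below is hypothetical, but it shows the shape of the comparison: JSON decodes in one call with native types, while XML requires tree navigation plus manual conversion, and the equivalent markup is larger on the wire:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical temperature sample as it might leave a middleware layer.
json_payload = '{"sensor": "tmp102", "bus": "i2c-1", "celsius": 23.5}'
xml_payload = (
    '<reading sensor="tmp102" bus="i2c-1">'
    '<celsius>23.5</celsius>'
    '</reading>'
)

# JSON: one call, and numbers come out as Python floats.
temp_json = json.loads(json_payload)["celsius"]

# XML: navigate the tree, then convert the text yourself.
root = ET.fromstring(xml_payload)
temp_xml = float(root.find("celsius").text)

assert temp_json == temp_xml == 23.5
# Payload size on the wire, bytes:
print(len(json_payload), len(xml_payload))
```

On a constrained target the same asymmetry shows up in code size and allocations, which is where the cJSON vs libxml2 comparison in the article becomes concrete.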
I tried learning compilers by building a language. It got out of hand.
https://www.reddit.com/r/programming/comments/1qp1j5m/i_tried_learning_compilers_by_building_a_language/

Hi all, I wanted to share a personal learning project I've been working on called sr-lang. It's a small programming language and compiler written in Zig, with MLIR as the backend. I started it as a way to learn compiler construction by doing. Zig felt like a great fit, and its style/constraints ended up influencing the language design more than I expected. For context, I'm an ML researcher and I work with GPU-related stuff a lot, which is why you'll see GPU-oriented experiments show up (e.g. Triton). Over time the project grew as I explored parsing, semantic analysis, type systems, and backend design. Some parts are relatively solid, and others are experimental or rough, which is very much part of the learning process.
A bit of honesty up front: I'm not a compiler expert. I used LLMs occasionally to explore ideas or unblock iterations. The design decisions and bugs are mine. If something looks awkward or overcomplicated, it probably reflects what I was learning at the time. It did take more than 10 months to get to this point (I'm slow).
Some implemented highlights (selected):
- Parser, AST, and semantic analysis in Zig
- MLIR-based backend
- Error unions and defer / errdefer style cleanup
- Pattern matching and sum types
- comptime and AST-as-data via code {} blocks
- Async/await and closures (still evolving)
- Inline MLIR and asm {} support
- Triton / GPU integration experiments
What's incomplete:
- Standard library is minimal
- Diagnostics/tooling and tests need work
- Some features are experimental and not well integrated yet
I'm sharing this because I'd love:
- feedback on design tradeoffs and rough edges
- help spotting obvious issues (or suggesting better structure)
- contributors who want low-pressure work (stdlib, tests, docs, diagnostics, refactors)
Repo: https://github.com/theunnecessarythings/sr-lang
Thanks for reading. Happy to answer questions or take criticism.
submitted by /u/theunnecessarythings (https://www.reddit.com/user/theunnecessarythings)
[link] (https://github.com/theunnecessarythings/sr-lang) [comments] (https://www.reddit.com/r/programming/comments/1qp1j5m/i_tried_learning_compilers_by_building_a_language/)
Walkthrough of X's algorithm that decides what you see
https://www.reddit.com/r/programming/comments/1qpharr/walkthrough_of_xs_algorithm_that_decides_what_you/

X open-sourced the algorithm behind the For You feed on January 20th (https://github.com/xai-org/x-algorithm).
Candidate Retrieval
Two sources feed the pipeline:
- Thunder: an in-memory service holding the last 48 hours of tweets in a DashMap (concurrent HashMap), indexed by author. It serves in-network posts from accounts you follow via gRPC.
- Phoenix: a two-tower neural network for discovery. The user tower is a Grok transformer with mean pooling. The candidate tower is a 2-layer MLP with SiLU. Both L2-normalize, so retrieval is just a dot product over precomputed corpus embeddings.
Scoring
Phoenix scores all candidates in a single transformer forward pass, predicting 18 engagement probabilities per post: like, reply, retweet, share, block, mute, report, dwell, video completion, etc. To batch efficiently without candidates influencing each other's scores, they use a custom attention mask. Each candidate attends to the user context and itself, but cross-candidate attention is zeroed out.
A WeightedScorer combines the 18 predictions into one number. Positive signals (likes, replies, shares) add to the score. Negative signals (blocks, mutes, reports) subtract. Then two adjustments:
- Author diversity: exponential decay so one author can't dominate your feed. A floor parameter (e.g. 0.3) ensures later posts still have some weight.
- Out-of-network penalty: posts from unfollowed accounts are multiplied by a weight (e.g. 0.7).
Filtering
10 pre-filters run before scoring (dedup, age limit, muted keywords, block lists, previously seen posts via Bloom filter). After scoring, a visibility filter queries an external safety service and a conversation dedup filter keeps only the highest-scored post per thread.
submitted by /u/noninertialframe96 (https://www.reddit.com/user/noninertialframe96)
[link] (https://codepointer.substack.com/p/x-algorithm-how-x-decides-what-550) [comments] (https://www.reddit.com/r/programming/comments/1qpharr/walkthrough_of_xs_algorithm_that_decides_what_you/)
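The scoring stage described above is easy to sketch. The weights and decay constants below are invented for illustration (only the 0.3 floor value appears in the walkthrough; nothing here is taken from the x-algorithm repo's actual code), but the mechanics match the description: positive engagement predictions add, negative ones subtract, and an exponential per-author decay with a floor damps repeat authors:

```python
# Illustrative weights; the real scorer combines 18 predictions.
POS_WEIGHTS = {"like": 1.0, "reply": 2.0, "share": 3.0}
NEG_WEIGHTS = {"block": 10.0, "mute": 5.0, "report": 20.0}

def weighted_score(preds: dict[str, float]) -> float:
    """Collapse per-engagement probabilities into one number."""
    score = sum(w * preds.get(k, 0.0) for k, w in POS_WEIGHTS.items())
    score -= sum(w * preds.get(k, 0.0) for k, w in NEG_WEIGHTS.items())
    return score

def apply_author_diversity(ranked, decay=0.5, floor=0.3):
    """Multiply an author's n-th post by max(decay**n, floor)."""
    seen: dict[str, int] = {}
    out = []
    for author, score in ranked:
        n = seen.get(author, 0)
        out.append((author, score * max(decay ** n, floor)))
        seen[author] = n + 1
    return out

posts = [("alice", 10.0), ("alice", 8.0), ("alice", 6.0), ("bob", 5.0)]
adjusted = apply_author_diversity(posts)
# alice's 2nd post is halved; her 3rd bottoms out at the 0.3 floor,
# so bob's single post now outranks it.
print(adjusted)
```

The floor is the interesting design choice: pure exponential decay would effectively hide an author's later posts entirely, while the floor keeps them competitive if their raw scores are high enough.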