Reddit Programming
I will send you the newest posts from the subreddit /r/programming
What breaks when you try to put tables, graphs, and vector search in one embedded engine?
https://www.reddit.com/r/programming/comments/1qsbj66/what_breaks_when_you_try_to_put_tables_graphs_and/

I've been working on an embedded database engine that runs in-process and supports multiple data models under one transactional system: relational tables, property graphs, and vector similarity search (HNSW-style). Trying to combine these in a single embedded engine surfaces some interesting programming and systems problems that don't show up when each piece lives in its own service. A few of the more interesting challenges:

1) Transaction semantics vs ANN indexes
Approximate vector indexes like HNSW don't naturally fit strict ACID semantics. Per-transaction updates increase write amplification, rollbacks are awkward, and crash recovery becomes complicated. In practice, you have to decide how "transactional" these structures really are.

2) Storage layout tension
Tables want row or column locality. Graphs want pointer-heavy adjacency structures. Vectors want contiguous, cache-aligned numeric blocks. You can unify the abstraction layer, but at the physical level these models fight each other unless you introduce specialization, which erodes the "single engine" ideal.

3) Query planning across models
Cross-model queries sound elegant, but cost models don't compose cleanly. Graph traversals plus vector search quickly explode the planner's search space, and most optimizers end up rule-based rather than cost-based.

4) Runtime embedding costs
Running a full DB engine inside a language runtime (instead of as a service) shifts problems:
- startup time vs long-lived processes
- memory ownership and GC interaction
- crash behavior and isolation expectations
Some problems get easier (latency, deployment); others get harder (debugging, failure isolation).

The motivation for exploring this design is to avoid stitching together multiple storage systems for local or embedded workloads, but the complexity doesn't disappear; it just moves.

If you've worked on database engines, storage systems, or runtime embedding (JVM, CPython, Rust, etc.), I'd be curious:
- where would you intentionally draw boundaries between models?
- which parts would you relax consistency on first?
- does embedded deployment change how you'd design these internals?

For concrete implementation context, this exploration is being done using an embedded configuration of ArcadeDB via language bindings. I'm not benchmarking or claiming this is "the right" approach; I'm mostly interested in the engineering trade-offs.

submitted by /u/Plastic_Director_480 (https://www.reddit.com/user/Plastic_Director_480)
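To make challenge 1 concrete, here is a minimal, hypothetical Python sketch (not ArcadeDB's implementation and not the poster's code) of one common compromise: buffer vector writes inside the transaction and apply them to the ANN index only at commit, so a rollback never has to unwind an HNSW graph. The brute-force ToyVectorIndex stands in for HNSW purely to keep the example self-contained.

```python
# Hypothetical sketch: make an ANN index "transactional enough" by buffering
# vector inserts per transaction and applying them only at commit.
import math

class ToyVectorIndex:
    """Stands in for an HNSW index; brute-force search keeps the sketch runnable."""
    def __init__(self):
        self.vectors = {}  # id -> vector

    def insert(self, vid, vec):
        self.vectors[vid] = vec

    def search(self, query, k=3):
        def dist(vid):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, self.vectors[vid])))
        return sorted(self.vectors, key=dist)[:k]

class Transaction:
    """Buffers vector inserts; the index only sees them on commit."""
    def __init__(self, index):
        self.index = index
        self.pending = []

    def insert_vector(self, vid, vec):
        self.pending.append((vid, vec))  # not yet visible to other readers

    def commit(self):
        for vid, vec in self.pending:
            self.index.insert(vid, vec)  # single apply point
        self.pending.clear()

    def rollback(self):
        self.pending.clear()             # nothing to undo inside the index itself

index = ToyVectorIndex()
tx = Transaction(index)
tx.insert_vector("a", [0.0, 0.0])
tx.insert_vector("b", [1.0, 1.0])
tx.rollback()                            # index untouched
tx.insert_vector("c", [0.1, 0.2])
tx.commit()
print(index.search([0.0, 0.0]))          # -> ['c']
```

The trade-off is that vectors written inside an open transaction are not visible to that transaction's own searches unless the engine also consults the pending buffer, which is exactly the "how transactional are these structures, really" decision the post describes.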
[link] (https://github.com/humemai/arcadedb-embedded-python) [comments] (https://www.reddit.com/r/programming/comments/1qsbj66/what_breaks_when_you_try_to_put_tables_graphs_and/)
The Hardest Bugs Exist Only In Organizational Charts
https://www.reddit.com/r/programming/comments/1qscpr5/the_hardest_bugs_exist_only_in_organizational/

Some of the most damaging failures in software systems are not technical bugs but organizational ones, rooted in team structure, ownership gaps, incentives, and communication breakdowns that quietly shape how code behaves.

submitted by /u/justok25 (https://www.reddit.com/user/justok25)
[link] (https://techyall.com/blog/the-hardest-bugs-exist-only-in-organizational-charts) [comments] (https://www.reddit.com/r/programming/comments/1qscpr5/the_hardest_bugs_exist_only_in_organizational/)
C3 Programming Language 0.7.9 - migrating away from generic modules
https://www.reddit.com/r/programming/comments/1qsexe1/c3_programming_language_079_migrating_away_from/

C3 is a C alternative for people who like C; see https://c3-lang.org/. In this release, C3 generics got a refresh. Generics were previously based on the concept of generic modules (somewhat similar to ML generic modules); 0.7.9 presents a superset of that functionality which decouples generics from the module while still retaining the benefit of being able to specify generic constraints in a single location. Other than this, the release has the usual fixes and improvements to the standard library.

This is expected to be one of the last releases in the 0.7.x series, with 0.8.0 planned for April (the current schedule is one 0.x release per year, with 1.0 planned for 2028). While 0.8.0 and 0.9.0 both allow for breaking changes, the language is complete as is, and current work is largely about polishing syntax and semantics, as well as filling gaps in the standard library.

submitted by /u/Nuoji (https://www.reddit.com/user/Nuoji)
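As a rough analogy for the "constraints in a single location" point (this is Python, not C3 syntax, and only illustrates the general idea), a type parameter can carry its constraint at the point where it is declared rather than at the level of a surrounding module:

```python
# Rough Python analogy: the generic's constraint is stated once, where the
# type variable is declared, instead of being tied to a module-level construct.
from typing import Protocol, TypeVar

class Comparable(Protocol):
    def __lt__(self, other: object) -> bool: ...

T = TypeVar("T", bound=Comparable)  # the constraint lives here, once

def smallest(items: list[T]) -> T:
    result = items[0]
    for item in items[1:]:
        if item < result:
            result = item
    return result

print(smallest([3, 1, 2]))          # 1
print(smallest(["pear", "apple"]))  # apple
```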
[link] (https://c3-lang.org/blog/c3-0-7-9-new-generics-and-new-optional-syntax/) [comments] (https://www.reddit.com/r/programming/comments/1qsexe1/c3_programming_language_079_migrating_away_from/)
The 80% Problem in Agentic Coding | Addy Osmani
https://www.reddit.com/r/programming/comments/1qsexgr/the_80_problem_in_agentic_coding_addy_osmani/

Those same teams saw review times balloon by 91%. Code review became the new bottleneck. The time saved writing code was consumed by organizational friction: more context switching, more coordination overhead, and managing the higher volume of changes.

submitted by /u/waozen (https://www.reddit.com/user/waozen)
[link] (https://addyo.substack.com/p/the-80-problem-in-agentic-coding) [comments] (https://www.reddit.com/r/programming/comments/1qsexgr/the_80_problem_in_agentic_coding_addy_osmani/)
Linux's b4 kernel development tool now dogfooding its AI agent code review helper
https://www.reddit.com/r/programming/comments/1qt63c6/linuxs_b4_kernel_development_tool_now_dogfeeding/

<!-- SC_OFF -->"The b4 tool used by Linux kernel developers to help manage their patch workflow around contributions to the Linux kernel has been seeing work on a text user interface to help with AI agent assisted code reviews. This weekend it successfully was dog feeding with b4 review TUI reviewing patches on the b4 tool itself. Konstantin Ryabitsev with the Linux Foundation and lead developer on the b4 tool has been working on the 'b4 review tui' for a nice text user interface for kernel developers making use of this utility for managing patches and wanting to opt-in to using AI agents like Claude Code to help with code review. With b4 being the de facto tool of Linux kernel developers, baking in this AI assistance will be an interesting option for kernel developers moving forward to augment their workflows with hopefully saving some time and/or catching some issues not otherwise spotted. This is strictly an optional feature of b4 for those actively wanting the assistance of an AI helper." - Phoronix <!-- SC_ON --> submitted by /u/Fcking_Chuck (https://www.reddit.com/user/Fcking_Chuck)
[link] (https://www.phoronix.com/news/Linux-b4-Tool-Dog-Feeding-AI) [comments] (https://www.reddit.com/r/programming/comments/1qt63c6/linuxs_b4_kernel_development_tool_now_dogfeeding/)
How can we integrate an AI learning platform like MOLTBook with robotics to create intelligent robot races and activity-based competitions?
https://www.reddit.com/r/programming/comments/1qtantb/how_can_we_integrate_an_ai_learning_platform_like/

I've been thinking about combining an AI-based learning system like MOLTBook with robotics to create something more interactive and hands-on, like robot races and smart activity challenges. Instead of just learning AI concepts on a screen, students could train their own robots using machine learning, computer vision, and sensors. For example, robots could learn to follow lines, avoid obstacles, recognize objects, or make decisions in real time. Then we could organize competitions where robots race or complete tasks using the intelligence they've developed, not just pre-written code.

The idea is to make robotics more practical and fun. Students wouldn't just assemble hardware; they would also train AI models, test strategies, and improve performance like a real-world engineering project. Think of it like Formula 1, but for AI-powered robots.

This could be great for schools, colleges, and tech institutes because it mixes coding, electronics, and problem-solving into one activity. It also encourages teamwork and innovation. Has anyone here tried building something similar or integrating AI platforms with robotics competitions? I'd love suggestions on tools, hardware, or frameworks to get started.

submitted by /u/DheMagician (https://www.reddit.com/user/DheMagician)
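As an illustration of the entry-level activity described above (every name here is made up for the sketch; real robot hardware APIs differ), a two-sensor line follower's decision loop fits in a few lines of Python:

```python
# Hypothetical sketch of a line follower's control loop. drive() just prints
# motor commands so the example runs anywhere without hardware.
def drive(left_speed, right_speed):
    print(f"motors: left={left_speed:+.1f} right={right_speed:+.1f}")

def follow_line_step(left_on_line, right_on_line):
    if left_on_line and right_on_line:
        drive(0.5, 0.5)   # line under both sensors: go straight
    elif left_on_line:
        drive(0.2, 0.5)   # drifted right: slow the left wheel to steer left
    elif right_on_line:
        drive(0.5, 0.2)   # drifted left: slow the right wheel to steer right
    else:
        drive(0.0, 0.0)   # line lost: stop (a trained policy could search instead)

# Simulated sensor readings for a few control ticks.
for reading in [(True, True), (True, False), (False, True), (False, False)]:
    follow_line_step(*reading)
```

A student project in the spirit of the post would replace the hard-coded rules with a trained policy and the print-based drive() with actual motor control.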
[link] (http://moltbook.com/) [comments] (https://www.reddit.com/r/programming/comments/1qtantb/how_can_we_integrate_an_ai_learning_platform_like/)
Semantic Compression — why modeling “real-world objects” in OOP often fails
https://www.reddit.com/r/programming/comments/1qtbi2l/semantic_compression_why_modeling_realworld/

Read this after seeing it referenced in a comment thread. It pushes back on the usual "model the real world with classes" approach and explains why it tends to fall apart in practice. The author uses a real C++ example from The Witness editor and shows how writing concrete code first, then pulling out shared pieces as they appear, leads to cleaner structure than designing class hierarchies up front. It's opinionated, but grounded in actual code instead of diagrams or buzzwords.

submitted by /u/Digitalunicon (https://www.reddit.com/user/Digitalunicon)
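For readers who haven't seen the article, here is a small Python sketch of the idea (not the article's C++ code): write the concrete cases first, then compress the repetition that actually appears, instead of designing an abstraction up front.

```python
# Pass 1: two concrete, slightly repetitive functions, written as needed.
def move_entity(entity, dx, dy):
    entity["x"] = max(0, min(100, entity["x"] + dx))
    entity["y"] = max(0, min(100, entity["y"] + dy))

def move_camera(camera, dx, dy):
    camera["x"] = max(0, min(100, camera["x"] + dx))
    camera["y"] = max(0, min(100, camera["y"] + dy))

# Pass 2: the shared piece is now obvious, so pull it out ("compress" it).
def clamp(value, low=0, high=100):
    return max(low, min(high, value))

def move(obj, dx, dy):
    obj["x"] = clamp(obj["x"] + dx)
    obj["y"] = clamp(obj["y"] + dy)

player = {"x": 99, "y": 10}
move(player, 5, -3)
print(player)  # {'x': 100, 'y': 7}
```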
[link] (https://caseymuratori.com/blog_0015) [comments] (https://www.reddit.com/r/programming/comments/1qtbi2l/semantic_compression_why_modeling_realworld/)