Reddit Programming
I will send you the newest posts from the subreddit /r/programming.
7,432 pages of legacy docs to 3s queries with hybrid search + reranking
https://www.reddit.com/r/programming/comments/1qm7u5z/7432_pages_of_legacy_docs_to_3s_queries_with/

Built a RAG system for 20-year-old Essbase documentation. Hybrid retrieval (BM25 + vector search) with FlashRank reranking. Validated across 4 LLM families to avoid vendor lock-in. 170 seconds to index, 3-second queries, $20/year operating cost. Wrote about how it works. submitted by /u/antidrugue (https://www.reddit.com/user/antidrugue)
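The post doesn't say how the BM25 and vector rankings are merged before reranking, so as an illustration, here is reciprocal rank fusion, one common way to combine two ranked lists in hybrid retrieval, sketched in Kotlin. The function name and the constant k = 60 are assumptions, not details from the article.

```kotlin
// Reciprocal rank fusion: combine two ranked lists of document IDs.
// k = 60 is the constant from the original RRF paper; the post does not
// say which fusion method the author used, so treat this as illustrative.
fun rrfFuse(
    bm25Ranking: List<String>,
    vectorRanking: List<String>,
    k: Int = 60,
): List<String> {
    val scores = mutableMapOf<String, Double>()
    for (ranking in listOf(bm25Ranking, vectorRanking)) {
        ranking.forEachIndexed { rank, docId ->
            // Each list contributes 1 / (k + rank); documents ranked
            // high in either list float to the top of the fused list.
            scores.merge(docId, 1.0 / (k + rank + 1), Double::plus)
        }
    }
    return scores.entries.sortedByDescending { it.value }.map { it.key }
}

fun main() {
    val bm25 = listOf("doc3", "doc1", "doc7")
    val vector = listOf("doc1", "doc9", "doc3")
    println(rrfFuse(bm25, vector)) // doc1 and doc3 lead; doc7/doc9 trail
}
```

In a setup like the post's, the fused list would then be handed to the reranker (FlashRank) for final ordering.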
[link] (https://clouatre.ca/posts/rag-legacy-systems/) [comments] (https://www.reddit.com/r/programming/comments/1qm7u5z/7432_pages_of_legacy_docs_to_3s_queries_with/)
Building a lightning-fast highly-configurable Rust-based backtesting system
https://www.reddit.com/r/programming/comments/1qmdzue/building_a_lightningfast_highlyconfigurable/

I created a very detailed technical design doc describing how I built a Rust-based algorithmic trading platform. Feel free to ask me any questions below! submitted by /u/ReplacementNo598 (https://www.reddit.com/user/ReplacementNo598)
[link] (https://nexustrade.io/blog/building-a-lightning-fast-highly-configurable-rust-based-backtesting-system-20260119) [comments] (https://www.reddit.com/r/programming/comments/1qmdzue/building_a_lightningfast_highlyconfigurable/)
I got tired of manual priority weights in proxies so I used a Reverse Radix Tree instead
https://www.reddit.com/r/programming/comments/1qmhw95/i_got_tired_of_manual_priority_weights_in_proxies/

Most reverse proxies like Nginx or Traefik handle domain rules in the order you write them or by using those annoying "priority" tags. If you have overlapping wildcards, like *.myapp.test and api.myapp.test, you usually have to play "Priority Tetris" to make sure the right rule wins. I wanted something more deterministic and intuitive: a system where the most specific match always wins, without me having to tinker with config weights every time I add a subdomain.

I ended up building a Reverse Radix Tree. The basic idea is that domain hierarchy is actually right to left: test -> myapp -> api. By splitting the domain at the dots and reversing the segments before putting them in the tree, the data structure finally matches the way DNS actually works.

To handle cases where multiple patterns might match (like api-* vs *), I added a "Literal Density" score. The resolver counts how many non-wildcard characters are in a segment and tries the "densest" (most specific) ones first. This happens naturally as you walk down the tree, so the hierarchy itself acts as a filter.

I wrote a post about the logic, how the scoring works, and how I use named parameters to hydrate dynamic upstreams: https://getlode.app/blog/2026-01-25-stop-playing-priority-tetris

How do you guys handle complex wildcard routing? Do you find manual weights a necessary evil, or would you prefer a hierarchical approach like this? submitted by /u/robbiedobbie (https://www.reddit.com/user/robbiedobbie)
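To make the idea concrete, here is a minimal Kotlin sketch of a reverse radix tree with literal-density ordering. All names (Node, Router, resolve) are hypothetical; this is not the author's implementation, just the mechanism the post describes.

```kotlin
// Store domain segments right-to-left in a tree; at each level, try the
// "densest" (most literal characters) matching pattern first.
class Node(val pattern: String) {
    val children = mutableListOf<Node>()
    var target: String? = null // upstream to route to, if a rule ends here

    // Literal density: count of non-wildcard characters in the pattern.
    val density: Int get() = pattern.count { it != '*' }

    fun matches(segment: String): Boolean =
        pattern == "*" || pattern == segment ||
            (pattern.endsWith("*") && segment.startsWith(pattern.dropLast(1)))
}

class Router {
    private val root = Node("")

    fun add(rule: String, target: String) {
        var node = root
        // "api.myapp.test" -> ["test", "myapp", "api"]: the hierarchy
        // really is right-to-left, just like DNS delegation.
        for (seg in rule.split('.').reversed()) {
            node = node.children.find { it.pattern == seg }
                ?: Node(seg).also { node.children.add(it) }
        }
        node.target = target
    }

    fun resolve(host: String): String? =
        resolve(root, host.split('.').reversed())

    private fun resolve(node: Node, segments: List<String>): String? {
        if (segments.isEmpty()) return node.target
        // Densest candidates first, so "api-*" beats "*" automatically.
        for (child in node.children
            .filter { it.matches(segments.first()) }
            .sortedByDescending { it.density }) {
            resolve(child, segments.drop(1))?.let { return it }
        }
        return null
    }
}

fun main() {
    val r = Router()
    r.add("*.myapp.test", "wildcard-upstream")
    r.add("api.myapp.test", "api-upstream")
    println(r.resolve("api.myapp.test")) // api-upstream (most specific wins)
    println(r.resolve("web.myapp.test")) // wildcard-upstream
}
```

Because siblings are tried in descending literal density, api.myapp.test wins over *.myapp.test with no manual weights, which is the "hierarchy acts as a filter" behavior described above.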
[link] (https://getlode.app/blog/2026-01-25-stop-playing-priority-tetris) [comments] (https://www.reddit.com/r/programming/comments/1qmhw95/i_got_tired_of_manual_priority_weights_in_proxies/)
Been following the metadata management space for work reasons and came across an interesting design problem that Apache Gravitino tried to solve in their 1.1 release
https://www.reddit.com/r/programming/comments/1qmkv8f/been_following_the_metadata_management_space_for/

Been following the metadata management space for work reasons and came across an interesting design problem that Apache Gravitino tried to solve in their 1.1 release.

The problem: we have like 5+ different table formats now (Iceberg, Delta Lake, Hive, Hudi, now Lance for vectors) and each has its own catalog implementation, its own way of handling namespaces, and its own capability negotiation. If you want to build a unified metadata layer across all of them, you end up writing tons of boilerplate code for each new format.

Their solution was to create a generic lakehouse catalog framework that abstracts away the format-specific stuff. The idea is you define a standard interface for how catalogs should negotiate capabilities and handle namespaces, then each format implementation just fills in the blanks.

What caught my attention was the trade-off discussion. On one hand, abstractions add complexity and sometimes leak. On the other hand, the lakehouse ecosystem is adding new formats constantly. Without this kind of framework, every new format means rewriting similar integration code.

From a software design perspective, this reminded me of the adapter pattern but at a larger scale. The challenge is figuring out what belongs in the abstract interface vs what's genuinely format-specific. Has anyone here dealt with similar unification problems? Like building a common interface across multiple storage backends or database types? Curious how you decided where to draw the abstraction boundary.

Link to the release notes if anyone wants to dig into specifics: https://github.com/apache/gravitino/releases/tag/v1.1.0 submitted by /u/Agitated_Fox2640 (https://www.reddit.com/user/Agitated_Fox2640)
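For a feel of the shape being described (a standard interface plus per-format adapters with capability negotiation), here is a deliberately simplified Kotlin sketch. Gravitino's actual framework is Java and far richer; every name below is hypothetical.

```kotlin
// Illustrative adapter-style design: a format-agnostic catalog interface
// with explicit capability negotiation. Not Gravitino's real API.
enum class Capability { TIME_TRAVEL, SCHEMA_EVOLUTION, VECTOR_INDEX }

data class TableMetadata(val name: String, val location: String)

interface LakehouseCatalog {
    val format: String
    fun capabilities(): Set<Capability>
    fun listNamespaces(): List<String>
    fun loadTable(namespace: String, name: String): TableMetadata
}

// Each format only fills in the blanks; callers never branch on format.
class IcebergCatalog : LakehouseCatalog {
    override val format = "iceberg"
    override fun capabilities() =
        setOf(Capability.TIME_TRAVEL, Capability.SCHEMA_EVOLUTION)
    override fun listNamespaces() = listOf("analytics")
    override fun loadTable(namespace: String, name: String) =
        TableMetadata(name, "s3://warehouse/$namespace/$name")
}

fun main() {
    val catalogs: List<LakehouseCatalog> = listOf(IcebergCatalog())
    // Generic code negotiates features instead of hard-coding formats.
    val timeTravelable = catalogs.filter {
        Capability.TIME_TRAVEL in it.capabilities()
    }
    println(timeTravelable.map { it.format }) // [iceberg]
}
```

The abstraction-boundary question from the post shows up here as: which operations go into LakehouseCatalog, and which stay behind a capability flag that generic code must check first.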
[link] (https://github.com/apache/gravitino/releases/tag/v1.1.0) [comments] (https://www.reddit.com/r/programming/comments/1qmkv8f/been_following_the_metadata_management_space_for/)
C++ RAII guard to detect heap allocations in scopes
https://www.reddit.com/r/programming/comments/1qmo44v/c_raii_guard_to_detect_heap_allocations_in_scopes/

Needed a lightweight way to catch heap allocations in C++, couldn't find anything simple, so I built this. Sharing in case it helps anyone. submitted by /u/North_Chocolate7370 (https://www.reddit.com/user/North_Chocolate7370)
[link] (https://github.com/mkslge/noalloc-cpp) [comments] (https://www.reddit.com/r/programming/comments/1qmo44v/c_raii_guard_to_detect_heap_allocations_in_scopes/)
The open-source React calendar inspired by macOS Calendar – DayFlow
https://www.reddit.com/r/programming/comments/1qn1gmz/the_opensource_react_calendar_inspired_by_macos/

Hi everyone 👋 I'd like to share DayFlow, an open-source full-calendar component for the web that I've been building over the past year. I'm a heavy macOS Calendar user, and when I was looking for a clean, modern calendar UI on GitHub (especially one that works well with Tailwind / shadcn-ui), I couldn't find something that fully matched my needs. So I decided to build one myself.

What DayFlow focuses on:
- Clean, modern calendar UI inspired by macOS Calendar
- Built with React, designed for modern web apps
- Easy to integrate with shadcn-ui and other Tailwind UI libraries
- Modular architecture (views, events, panels are customizable)
- Actively working on i18n support

The project is fully open source, and I'd really appreciate:
- Feedback on the API & architecture
- Feature suggestions
- Bug reports
- Or PRs if you're interested in contributing

GitHub: https://github.com/dayflow-js/calendar
Demo: https://dayflow-js.github.io/calendar/

Thanks for reading, and I'd love to hear your thoughts 🙏 submitted by /u/Cultural_Mission_482 (https://www.reddit.com/user/Cultural_Mission_482)
[link] (https://dayflow-js.github.io/calendar/) [comments] (https://www.reddit.com/r/programming/comments/1qn1gmz/the_opensource_react_calendar_inspired_by_macos/)
Locale-dependent case conversion bugs persist (Kotlin as a real-world example)
https://www.reddit.com/r/programming/comments/1qndjes/localedependent_case_conversion_bugs_persist/

Case-insensitive logic can fail in surprising ways when string case conversion depends on the ambient locale. Many programs assume that operations like ToLower() or ToUpper() are locale-neutral, but in reality their behavior can vary by system settings. This can lead to subtle bugs, often involving the well-known "Turkish I" casing rules, where identifiers, keys, or comparisons stop working correctly outside en-US environments. The Kotlin compiler incident linked here is a concrete, real-world example of this broader class of locale-dependent case conversion bugs. submitted by /u/BoloFan05 (https://www.reddit.com/user/BoloFan05)
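A minimal Kotlin demonstration of this failure mode (not the compiler's actual code): with a Turkish default locale, lowercasing maps 'I' to the dotless 'ı', so a naive case-insensitive comparison fails until the locale is pinned.

```kotlin
import java.util.Locale

fun main() {
    // Simulate a machine whose system locale is Turkish.
    Locale.setDefault(Locale.forLanguageTag("tr-TR"))

    val keyword = "IMPORT"

    // Locale-sensitive lowering: Turkish casing maps 'I' to dotless 'ı',
    // so this comparison silently fails outside en-US-like environments.
    println(keyword.lowercase(Locale.getDefault()) == "import") // false ("ımport")

    // Locale-neutral lowering is the usual fix for identifiers and keys.
    println(keyword.lowercase(Locale.ROOT) == "import") // true
}
```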
[link] (https://sam-cooper.medium.com/the-country-that-broke-kotlin-84bdd0afb237) [comments] (https://www.reddit.com/r/programming/comments/1qndjes/localedependent_case_conversion_bugs_persist/)