Reddit Programming
I will send you the newest posts from subreddit /r/programming
<!-- SC_OFF -->Been going back and forth on this for a while. The common wisdom these days is "just use Virtual Threads, reactive is dead", and honestly the developer-experience argument is hard to push back on. But I kept having this nagging feeling that for workloads mixing I/O and heavy CPU (think: DB query -> BCrypt verify -> JWT sign), the non-blocking model might still have an edge that wasn't showing up in the benchmarks I could find.

The usual suspects all had blind spots for my use case: TechEmpower (https://github.com/TechEmpower/FrameworkBenchmarks) is great, but it measures raw CRUD throughput; chrisgleissner's loom-webflux-benchmarks (https://github.com/chrisgleissner/loom-webflux-benchmarks), probably the most rigorous comparison out there, simulates DB latency with artificial delays rather than real BCrypt; and the Baeldung article on the topic (https://www.baeldung.com/java-reactor-webflux-vs-virtual-threads) is purely theoretical. None of them tested "what happens when your event loop is free during the DB wait, but then has to chew through 100ms of BCrypt right after". So I built two identical implementations of a Spring Boot account service and hammered them with k6.

The setup

- Stack A: Spring WebFlux + R2DBC + Netty
- Stack B: Spring MVC + Virtual Threads + JDBC + Tomcat
- Hardware/software: i9-13900KF, 64GB DDR5, OpenJDK 25.0.2 (Temurin), local PostgreSQL in Docker
- Load: 50 VUs, 2-minute steady state, sequential runs (no resource sharing between the two stacks)
- 50/50 deterministic VU split between two scenarios

Scenario 1 - Pure CPU: BCrypt hash (cost=10), zero I/O

WebFlux offloads to Schedulers.boundedElastic() so it doesn't block the event loop; VT just runs directly on the virtual thread.

| Metric | WebFlux | VT |
|--------|---------|-----|
| median | 62ms | 55ms |
| p(95) | 69ms | 71ms |
| max | 88ms | 125ms |

Basically a draw. VT wins slightly on median because there's no dispatch overhead; WebFlux wins on max because boundedElastic() has a larger pool to absorb spikes when 50 threads are all doing BCrypt simultaneously.
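To make the offload difference concrete, here is a plain-JDK sketch of the two execution models, with no Spring involved: a small fixed pool stands in for Schedulers.boundedElastic(), `slowHash` is a toy deterministic stand-in for BCrypt, and all names are illustrative, not taken from the benchmark code:

```java
import java.util.concurrent.*;

public class OffloadDemo {
    // Toy stand-in for BCrypt: a CPU-bound function that monopolizes a thread.
    static String slowHash(String input) {
        long h = 1125899906842597L;
        for (int round = 0; round < 100_000; round++) {
            for (int i = 0; i < input.length(); i++) {
                h = 31 * h + input.charAt(i);
            }
        }
        return Long.toHexString(h);
    }

    public static void main(String[] args) throws Exception {
        // WebFlux-style: the "event loop" thread never hashes itself;
        // it dispatches to a bounded worker pool and continues with a future.
        ExecutorService boundedPool = Executors.newFixedThreadPool(8);
        Future<String> offloaded = boundedPool.submit(() -> slowHash("secret"));

        // Virtual-thread style: the request thread just runs the work inline.
        var result = new String[1];
        Thread vt = Thread.ofVirtual().start(() -> result[0] = slowHash("secret"));
        vt.join();

        // Same hash either way; the difference is only where the CPU burns.
        System.out.println(offloaded.get().equals(result[0]));
        boundedPool.shutdown();
    }
}
```

The dispatch to the pool is the "dispatch overhead" mentioned above; the inline run is what a virtual thread does by default.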
Nothing surprising here: BCrypt monopolizes a full thread in both models, and no preemption is possible in Java.

Scenario 2 - Real login: SELECT + BCrypt verify + JWT sign

| Metric | WebFlux | VT |
|--------|---------|-----|
| median | 80ms | 96ms |
| p(90) | 89ms | 110ms |
| p(95) | 94ms | 118ms |
| max | 221ms | 245ms |

WebFlux wins consistently, -20% on p(95), and the gap is stable across all percentiles. My read on why: R2DBC releases the event loop immediately during the SELECT, so the thread is free to serve other requests while waiting on Postgres. With JDBC + VT, the virtual thread does get unmounted from its carrier thread during the blocking call, but the remounting + synchronization afterward adds a few ms. BCrypt then runs right after, so that small overhead gets amplified consistently on every single request.

Small note: VT actually processed 103 more requests than WebFlux in that scenario (+0.8%) while showing higher latency, which rules out "WebFlux wins because it was under less pressure". The 24ms gap is real. Overall throughput: 123 vs 121 req/s. Zero errors on both sides.

Caveats (and I think these matter):

- Local DB, same machine. With real network latency, R2DBC's advantage would likely be more pronounced, since more time is freed on the event loop per request
- Only 50 VUs; at 500+ VUs, HikariCP pool saturation would probably widen the gap further
- Single run each, no confidence intervals
- BCrypt is a specific proxy for "heavy CPU"; other CPU-bound ops might behave differently

Takeaway

If your service is doing "I/O wait, then heavy CPU" in a tight loop, the reactive model still has a measurable latency advantage at moderate load, even in 2026. If it's pure CPU or light I/O, Virtual Threads are equivalent and the simpler programming model wins hands down.

Full report + methodology + raw k6 JSON: https://gitlab.com/RobinTrassard/codenames-microservices/-/blob/account-java-version/load-tests/results/BENCHMARK_REPORT.md <!-- SC_ON --> submitted by /u/Lightforce_ (https://www.reddit.com/user/Lightforce_)
I ported Daniel Lemire's fast_float to c99
https://www.reddit.com/r/programming/comments/1rnswm5/i_ported_daniel_lemires_fast_float_to_c99/

<!-- SC_OFF -->It's a single-header drop-in that's been exhaustively tested against the fast_float suite and is benchmarking slightly faster than the C++ original. SIMD, portable, single-header, no allocation; so hot right now <!-- SC_ON --> submitted by /u/foobear777 (https://www.reddit.com/user/foobear777)
[link] (http://github.com/kolemannix/ffc.h) [comments] (https://www.reddit.com/r/programming/comments/1rnswm5/i_ported_daniel_lemires_fast_float_to_c99/)
Looking for textbook 📚: Finite Automata and Formal Languages: A Simple Approach, by A. M. Padma Reddy, published by Pearson Education India. 📚
https://www.reddit.com/r/programming/comments/1rnu1d1/looking_for_textbook_finite_automata_and_formal/

<!-- SC_OFF -->Hi everyone, my university syllabus for Theory of Computation / Automata Theory recommends the book Finite Automata and Formal Languages: A Simple Approach by A. M. Padma Reddy. Has anyone here used this book before, or know where I could: • access a legal PDF or ebook
• borrow it through a digital library
• find lecture notes or alternative books that cover the same topics

If not, I'd also appreciate recommendations for good alternative textbooks covering:

Module I: Introduction to Finite Automata, Central Concepts of Automata Theory, Deterministic Finite Automata (DFA), Nondeterministic Finite Automata (NFA), Applications of Finite Automata, Finite Automata with ε-Transitions
Module II: Regular Expressions, Regular Languages, Properties
Module III: Properties of Regular Languages, Context-Free Grammars
Module IV: Pushdown Automata, Context-Free Languages
Module V: Turing Machines, Undecidability

Any help or recommendations would be appreciated. Thanks in advance! 🙏 📚 <!-- SC_ON --> submitted by /u/Broad-Ad2003 (https://www.reddit.com/user/Broad-Ad2003)
[link] (https://www.google.com/search?q=Finite+Automata+and+Formal+Languages%3A+A+Simple+Approach%2C+by+A.+M.+Padma+Reddy&rlz=1C1VDKB_enIN1111IN1112&oq=Finite+Automata+and+Formal+Languages%3A+A+Simple+Approach%2C+by+A.+M.+Padma+Reddy&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRhBMgYIAhBFGEEyBggDEEUYPDIGCAQQRRhB0gEHNjQzajBqN6gCALACAA&sourceid=chrome&ie=UTF-8) [comments] (https://www.reddit.com/r/programming/comments/1rnu1d1/looking_for_textbook_finite_automata_and_formal/)
Learn Observer Pattern the Easy Way | Java Design Patterns
https://www.reddit.com/r/programming/comments/1ro52s1/learn_observer_pattern_the_easy_way_java_design/

<!-- SC_OFF -->Documenting my journey learning design patterns. Today: the Observer Pattern. Simple explanation + example. Would love feedback from devs here. <!-- SC_ON --> submitted by /u/Big-Conflict-2600 (https://www.reddit.com/user/Big-Conflict-2600)
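For readers who want a quick reference before watching, a minimal sketch of the pattern in plain Java; the names here are illustrative and not taken from the video:

```java
import java.util.ArrayList;
import java.util.List;

// Observers implement a single callback the subject invokes on changes.
interface TemperatureObserver {
    void onTemperatureChanged(double celsius);
}

// The subject keeps a list of observers and notifies them on each update.
class TemperatureSensor {
    private final List<TemperatureObserver> observers = new ArrayList<>();

    void addObserver(TemperatureObserver o) { observers.add(o); }
    void removeObserver(TemperatureObserver o) { observers.remove(o); }

    void setTemperature(double celsius) {
        // Notify every registered observer; the sensor knows nothing
        // about what each observer does with the value.
        for (TemperatureObserver o : observers) {
            o.onTemperatureChanged(celsius);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        TemperatureSensor sensor = new TemperatureSensor();
        List<String> log = new ArrayList<>();
        sensor.addObserver(c -> log.add("display: " + c));
        sensor.addObserver(c -> { if (c > 30) log.add("alert: " + c); });
        sensor.setTemperature(22.5);
        sensor.setTemperature(31.0);
        System.out.println(log); // prints [display: 22.5, display: 31.0, alert: 31.0]
    }
}
```

The key decoupling is that the subject depends only on the observer interface, so new reactions (display, alert, logging) can be added without touching the sensor.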
[link] (https://youtu.be/SxTmZ5QLTEc) [comments] (https://www.reddit.com/r/programming/comments/1ro52s1/learn_observer_pattern_the_easy_way_java_design/)
Building a strict RFC 8259 JSON parser: what most parsers silently accept and why it matters for deterministic systems
https://www.reddit.com/r/programming/comments/1rp0zl4/building_a_strict_rfc_8259_json_parser_what_most/

<!-- SC_OFF -->Most JSON parsers make deliberate compatibility choices: lone surrogates get replaced, duplicate keys get silently resolved, and non-zero numbers that underflow to IEEE 754 zero are accepted without error. These are reasonable defaults for application code. They become correctness failures when the parsed JSON feeds a system that hashes, signs, or compares by raw bytes. If two parsers handle the same malformed input differently, the downstream bytes diverge, the hash diverges, and the signature fails.

This article walks through building a strict RFC 8259 parser in Go that rejects what lenient parsers silently accept. It covers:

- UTF-8 validation in two passes: bulk upfront, then incremental for semantic constraints like noncharacter rejection and surrogate detection on decoded code points
- surrogate pair handling, where lone surrogates are rejected per RFC 7493 while valid pairs are decoded and reassembled
- duplicate key detection after escape decoding (because "\u0061" and "a" are the same key)
- number grammar enforcement in four layers: leading zeros, missing fraction digits, lexical negative zero, and overflow/underflow detection
- seven independent resource bounds for denial-of-service protection on untrusted input

The parser exists because canonicalization requires a one-to-one mapping between accepted input and canonical output. Silent leniency breaks that mapping. The article includes the actual implementation code for each section. <!-- SC_ON --> submitted by /u/UsrnameNotFound-404 (https://www.reddit.com/user/UsrnameNotFound-404)
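As a rough illustration of the lexical part of the number-grammar enforcement the article describes (leading zeros, missing fraction digits, lexical negative zero), here is a small sketch in Java; it is not the article's Go code, and the overflow/underflow layer, which needs a numeric pass, is omitted:

```java
// Strict lexical check of the JSON number grammar per RFC 8259, plus an
// extra rule rejecting a lexically negative zero (e.g. "-0", "-0.0", "-0e5").
// Illustrative sketch only; not the article's implementation.
public class StrictNumber {
    static boolean isStrictNumber(String s) {
        // RFC 8259 grammar: int part is "0" or a non-zero digit followed by
        // digits; optional fraction ".digit+"; optional exponent. This full
        // match already rejects leading zeros ("01") and "1." / ".5".
        if (!s.matches("-?(0|[1-9][0-9]*)(\\.[0-9]+)?([eE][+-]?[0-9]+)?")) {
            return false;
        }
        // Lexical negative zero: strip any exponent, then reject if the
        // remainder starts with "-0" and contains no non-zero digit.
        String mantissa = s.replaceAll("[eE][+-]?[0-9]+$", "");
        if (mantissa.startsWith("-0") && mantissa.replaceAll("[-.0]", "").isEmpty()) {
            return false;
        }
        return true;
    }
}
```

For example, "0", "-1.5e10", and "-0.5" pass, while "01", "1.", "-0", and "-0.0" are rejected.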
[link] (https://lattice-substrate.github.io/blog/2026/02/26/strict-rfc8259-json-parser/) [comments] (https://www.reddit.com/r/programming/comments/1rp0zl4/building_a_strict_rfc_8259_json_parser_what_most/)