Reddit Programming
I will send you the newest posts from subreddit /r/programming
WebFragments: A new approach to micro-frontends (from the co-creator of Angular and Microsoft’s DX lead)
https://www.reddit.com/r/programming/comments/1oeryvj/webfragments_a_new_approach_to_microfrontends/

Hey folks 👋 Just released a new Señors @ Scale episode that should interest anyone working on large frontend platforms or micro-frontends. I sat down with Igor Minar (co-creator of Angular, now at Cloudflare) and Natalia Venditto (Principal PM for JavaScript Developer Experience at Microsoft) to talk about WebFragments, a new way to build modular frontends that actually scale. The idea:
→ Each micro-frontend runs in its own isolated JavaScript context (like Docker for the browser)
→ The DOM is virtualized using Shadow DOM, not iframes
→ Fragments stay independent but render as one seamless app
→ It’s framework-agnostic: React, Vue, Qwik, Angular… all work
They also shared how Cloudflare is already migrating its production dashboard to WebFragments, incrementally, without breaking the existing platform. submitted by /u/creasta29 (https://www.reddit.com/user/creasta29)
[link] (https://www.youtube.com/watch?v=JY2Yjy2020I&list=PLeeGnEj5psFIwWJfpCwnedMsFApK6CvRr) [comments] (https://www.reddit.com/r/programming/comments/1oeryvj/webfragments_a_new_approach_to_microfrontends/)
Ken Thompson's "Trusting Trust" compiler backdoor - Now with the actual source code (2023)
https://www.reddit.com/r/programming/comments/1of2toi/ken_thompsons_trusting_trust_compiler_backdoor/

Ken Thompson's 1984 "Reflections on Trusting Trust" is a foundational paper in supply chain security, demonstrating that trusting source code alone isn't enough: you must trust the entire toolchain. The attack works in three stages:
1. Self-reproduction: create a program that outputs its own source code (a quine)
2. Compiler learning: use the compiler's self-compilation to teach it knowledge that persists only in the binary
3. Trojan horse deployment: inject backdoors that:
   - insert a password backdoor when compiling login.c
   - re-inject themselves when compiling the compiler
   - leave no trace in the source code after "training"
In 2023, Thompson finally released the actual code (file: nih.a) after Russ Cox asked for it. I wrote a detailed walkthrough with the real implementation annotated line-by-line. Why this matters for modern security:
- Highlights the limits of source code auditing
- Foundation for reproducible builds initiatives (Debian, etc.)
- Relevant to current supply chain attacks (SolarWinds, XZ Utils)
- Shows why diverse double-compiling (DDC) is necessary
The backdoor password was "codenih" (NIH = "not invented here"). Thompson confirmed it was built as a proof-of-concept but never deployed in production. submitted by /u/fizzner (https://www.reddit.com/user/fizzner)
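The self-reproduction stage is the least intuitive part, so here is a minimal illustrative sketch in Python rather than Thompson's actual C from nih.a: a quine built from a format string that embeds a repr of itself, plus a check that running the program really reproduces its own text.

```python
import io
import contextlib

# A quine: the format string, when formatted with itself, yields a
# two-line program whose output is exactly that program's source.
quine = 'quine = %r\nprint(quine %% quine)'
source = quine % quine            # the full program text

# Run the generated program and capture what it prints.
out = io.StringIO()
with contextlib.redirect_stdout(out):
    exec(source)

# A quine's output equals its own source (plus print's trailing newline).
assert out.getvalue() == source + "\n"
```

Thompson's compiler trojan uses the same trick in C: the malicious patch carries a quoted copy of itself so it can re-insert its own text whenever it recognizes that it is compiling the compiler.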
[link] (https://micahkepe.com/blog/thompson-trojan-horse/) [comments] (https://www.reddit.com/r/programming/comments/1of2toi/ken_thompsons_trusting_trust_compiler_backdoor/)
Benchmarks for a distributed key-value store
https://www.reddit.com/r/programming/comments/1ofi1wt/benchmarks_for_a_distributed_keyvalue_store/

Hey folks, I’ve been working on a project called SevenDB (https://github.com/sevenDatabase/SevenDB). It’s a reactive database, or rather a distributed key-value store, focused on determinism and predictable replication (Raft-based). We have completed our work on Raft, durable subscriptions, the emission contract, etc., and now it is time to showcase it. I’m trying to put together a fair and transparent benchmarking setup to share the performance numbers. If you were evaluating a new system like this, what benchmarks would you consider meaningful? I know raw throughput is good, but what benchmarks should I run to prove the utility of the database? I just want to design a solid test suite that makes sense to people who know this stuff better than I do. The work is open source, and adoption will depend heavily on which benchmarks we show and how well we perform in them. Curious to hear what kinds of metrics or experiments make you take a new DB seriously. submitted by /u/shashanksati (https://www.reddit.com/user/shashanksati)
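Beyond raw throughput, reviewers usually want tail latencies under a mixed read/write load. Below is a minimal harness sketch of that idea; the `store` is a plain dict stand-in, since I don't know the SevenDB client API, and a real run would swap in the actual client and a realistic key distribution.

```python
import random
import time

def bench(store, n_ops=10_000, read_ratio=0.9):
    """Mixed read/write loop; returns p50/p99/p99.9 latencies in seconds."""
    keys = [f"key-{i}" for i in range(1_000)]
    latencies = []
    for _ in range(n_ops):
        k = random.choice(keys)
        t0 = time.perf_counter()
        if random.random() < read_ratio:
            store.get(k)              # read path
        else:
            store[k] = "value"        # write path
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    pct = lambda p: latencies[int(p / 100 * (len(latencies) - 1))]
    return {"p50": pct(50), "p99": pct(99), "p99.9": pct(99.9)}

stats = bench({})
```

For a replicated store you would extend this with the dimensions raw throughput hides: latency under leader failover, replication lag vs. write rate, and subscription fan-out delay, each reported as percentiles rather than averages.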
[link] (https://github.com/sevenDatabase/SevenDB) [comments] (https://www.reddit.com/r/programming/comments/1ofi1wt/benchmarks_for_a_distributed_keyvalue_store/)
What are Monads?
https://www.reddit.com/r/programming/comments/1ofijln/what_are_monads/

I am an aspiring YouTuber. Could you please review the video and tell me what I can actually improve? https://youtu.be/nH4rnr5Xk6g Thanks in advance. submitted by /u/Tasty-Series3748 (https://www.reddit.com/user/Tasty-Series3748)
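For readers landing here from the title's question: a monad, in miniature, packages a value with a context and provides a `bind` that chains context-aware functions. A minimal illustrative sketch of the Maybe monad in Python (my own example, not taken from the video):

```python
class Maybe:
    """A value that may be absent; bind() chains fallible steps."""
    def __init__(self, value, is_just=True):
        self.value = value
        self.is_just = is_just

    @classmethod
    def just(cls, value):
        return cls(value)

    @classmethod
    def nothing(cls):
        return cls(None, is_just=False)

    def bind(self, fn):
        # Short-circuit: once a step fails, later steps are skipped.
        return fn(self.value) if self.is_just else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.just(x / y)

ok = Maybe.just(10).bind(lambda v: safe_div(v, 2))       # Just 5.0
bad = ok.bind(lambda v: safe_div(v, 0))                  # Nothing
```

The point of the pattern is that the error-propagation plumbing lives in `bind` once, instead of being repeated as an if-check after every step.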
[link] (https://youtu.be/nH4rnr5Xk6g) [comments] (https://www.reddit.com/r/programming/comments/1ofijln/what_are_monads/)