Reddit Programming
I will send you the newest posts from the subreddit /r/programming
zkTLS for Verifiable HTTP — Stop Blindly Trusting AI Agents & Oracles
https://www.reddit.com/r/programming/comments/1o5bxsd/zktls_for_verifiable_http_stop_blindly_trusting/

When you’re vibe-coding with LLMs, you hear it all the time.
LLMs say: “I sent the request.”
Oracles say: “This is the real data.”
But… how do you verify that actually happened?
You don’t. You just blindly trust. 😬 And this isn’t just an LLM problem; humans do this too.
Without proof, trust is fragile. That's why we built VEFAS (Verifiable Execution Framework for AI Agents) to change that.
We use zkTLS to turn any HTTP(S) request into a cryptographic proof: at time T, I sent request X to URL Y over real TLS and got response Z. No notaries, no trusted gateways, and anyone can verify the proof. This is the first layer of a bigger verifiable AI stack.
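To make the shape of that claim concrete, here is a minimal Go sketch. It is purely illustrative and not the VEFAS API: the type names, fields, and the Verify placeholder are assumptions standing in for whatever the real proving and verification code does.

package main

import (
	"errors"
	"fmt"
	"time"
)

// HTTPClaim is the public statement being proven: "at time T, request X went
// to URL Y over real TLS and response Z came back." (Illustrative only.)
type HTTPClaim struct {
	Timestamp    time.Time // T
	URL          string    // Y
	RequestHash  [32]byte  // commitment to request X
	ResponseHash [32]byte  // commitment to response Z
}

// Proof pairs the claim with an opaque zero-knowledge proof over the
// TLS session transcript.
type Proof struct {
	Claim HTTPClaim
	ZK    []byte // proving-system-specific bytes
}

// Verify is a placeholder for a zk verifier: a real one would check that the
// proof opens the commitments against an authentic TLS transcript, with no
// notary or trusted gateway in the loop.
func Verify(p Proof) error {
	if len(p.ZK) == 0 {
		return errors.New("missing zk proof")
	}
	// real verification of p.Claim against p.ZK would run here
	return nil
}

func main() {
	p := Proof{Claim: HTTPClaim{Timestamp: time.Now(), URL: "https://example.com"}, ZK: []byte{0x01}}
	fmt.Println(Verify(p)) // prints <nil> in this sketch
}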
The project is open source, under heavy development, and we’re inviting devs, cryptographers, and AI builders to help push this forward. submitted by /u/bryanlee9889 (https://www.reddit.com/user/bryanlee9889)
[link] (https://github.com/Off-Live/vefas) [comments] (https://www.reddit.com/r/programming/comments/1o5bxsd/zktls_for_verifiable_http_stop_blindly_trusting/)
Tests Don’t Prove Code Is Correct… They Just Agree With It
https://www.reddit.com/r/programming/comments/1o5rh32/tests_dont_prove_code_is_correct_they_just_agree/

“A test isn’t proof that something is correct, it’s proof that one piece of code behaves the way another piece of code thinks it should behave.” This thought hit me the other day while writing a few “perfectly passing” tests. I realized they weren’t actually proving anything, just confirming that my assumptions in two places matched. When both your implementation and your test share the same wrong assumption, everything still passes. Green checkmarks, false confidence. It made me rethink what tests are even for. They’re not really about proving truth; they’re more about locking down intent. A way to say, “If I ever change this behavior, I want to know.” The tricky part is that the intent itself can be wrong. Anyway, just a random reflection from too many late nights chasing 100% coverage. Curious how you all think about it: do you see tests as validation, documentation, or just guardrails to keep chaos in check? submitted by /u/untypedfuture (https://www.reddit.com/user/untypedfuture)
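A toy Go illustration of that failure mode (hypothetical names, not from the linked article): the spec calls for a 10% discount, the developer believes it is 15%, and writes both the code and the test from the same belief, so everything passes.

// discount.go
package discount

// ApplyDiscount is written from the (incorrect) belief that the discount is 15%.
func ApplyDiscount(price float64) float64 {
	return price * 0.85
}

// discount_test.go
package discount

import "testing"

// The test encodes the same wrong assumption, so it passes with a green
// checkmark even though the spec asked for 10% off (100 should become 90).
func TestApplyDiscount(t *testing.T) {
	if got := ApplyDiscount(100); got != 85 {
		t.Fatalf("ApplyDiscount(100) = %v, want 85", got)
	}
}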
[link] (https://medium.com/@arnonaxelrod/proof-driven-development-or-the-business-value-of-clean-code-b84380ff312e) [comments] (https://www.reddit.com/r/programming/comments/1o5rh32/tests_dont_prove_code_is_correct_they_just_agree/)
AI Won’t Fix Broken Systems: Lessons from the 2025 DORA Report
https://www.reddit.com/r/programming/comments/1o5wxpz/ai_wont_fix_broken_systems_lessons_from_the_2025/

Faster coding doesn’t always mean increased productivity. submitted by /u/aviator_co (https://www.reddit.com/user/aviator_co)
[link] (https://www.aviator.co/blog/ai-2025-dora-report/) [comments] (https://www.reddit.com/r/programming/comments/1o5wxpz/ai_wont_fix_broken_systems_lessons_from_the_2025/)
The Story of Codesmith: How a Competitor Crippled a $23.5M Bootcamp By Becoming a Reddit Moderator
https://www.reddit.com/r/programming/comments/1o67ip8/the_story_of_codesmith_how_a_competitor_crippled/

Saw this on ThePrimeagen's stream and thought it would be interesting to share. Has anyone here done a Codesmith bootcamp? submitted by /u/Happy_Junket_9540 (https://www.reddit.com/user/Happy_Junket_9540)
[link] (https://larslofgren.com/codesmith-reddit-reputation-attack/) [comments] (https://www.reddit.com/r/programming/comments/1o67ip8/the_story_of_codesmith_how_a_competitor_crippled/)
Introducing Reactive Programming for Go
https://www.reddit.com/r/programming/comments/1o6ieoy/introducing_reactive_programming_for_go/

Start writing declarative pipelines:

observable := ro.Pipe(
	ro.RangeWithInterval(0, 10, 1*time.Second),
	ro.Filter(func(x int) bool { return x%2 == 0 }),
	ro.Map(func(x int) string { return fmt.Sprintf("even-%d", x) }),
)

submitted by /u/samuelberthe (https://www.reddit.com/user/samuelberthe)
[link] (https://github.com/samber/ro) [comments] (https://www.reddit.com/r/programming/comments/1o6ieoy/introducing_reactive_programming_for_go/)
How Clean Commits Make PR Reviews Easier
https://www.reddit.com/r/programming/comments/1o6rbo1/how_clean_commits_make_pr_reviews_easier/

It's no secret that reviewing pull requests is time-consuming and incredibly important. Being able to speed up reviews, and to enable higher-quality reviews, is therefore a crucial skill for all developers. However, I find the vast majority of PRs to be incredibly unfriendly to reviewers. In this post (https://medium.com/@anujbiyani/ai-development-how-clean-commits-make-pr-reviews-easier-ec33f57eda70?source=friends_link&sk=4f1308bb6693f47236fb0da87bef3454) I wrote about some git commands that will help you craft PRs that are much easier to review. With a bit of practice it ends up being fairly quick to execute on, and your whole team will thank you. submitted by /u/fogeyman (https://www.reddit.com/user/fogeyman)
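For a sense of what such commands look like, here is a sketch of git commands commonly used to shape a branch into reviewable commits; the article's exact workflow may differ.

git add -p                         # stage changes hunk by hunk so each commit stays focused
git commit --fixup <commit>        # record a fix that logically belongs to an earlier commit
git rebase -i --autosquash main    # reorder, squash, and reword commits into a reviewable story
git push --force-with-lease        # update the PR branch without clobbering teammates' work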
[link] (https://medium.com/@anujbiyani/ai-development-how-clean-commits-make-pr-reviews-easier-ec33f57eda70?source=friends_link&sk=4f1308bb6693f47236fb0da87bef3454) [comments] (https://www.reddit.com/r/programming/comments/1o6rbo1/how_clean_commits_make_pr_reviews_easier/)