Autovacuum Tuning: Stop Table Bloat Before It Hurts
https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/
<!-- SC_OFF -->https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81 <!-- SC_ON --> submitted by /u/Temporary_Depth_2491 (https://www.reddit.com/user/Temporary_Depth_2491)
[link] (https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81) [comments] (https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/)
https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/
<!-- SC_OFF -->https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81 <!-- SC_ON --> submitted by /u/Temporary_Depth_2491 (https://www.reddit.com/user/Temporary_Depth_2491)
[link] (https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81) [comments] (https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/)
asyncio: a library with too many sharp corners
https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/
submitted by /u/pkkm (https://www.reddit.com/user/pkkm)
[link] (https://sailor.li/asyncio) [comments] (https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/)
https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/
submitted by /u/pkkm (https://www.reddit.com/user/pkkm)
[link] (https://sailor.li/asyncio) [comments] (https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/)
Learn SOLID principles: Single Responsibility Principle
https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/
<!-- SC_OFF -->Writing clean code is a must for any developer who wants their work to shine. It’s not just about getting your program to run; it’s about making code that’s easy to read, test, and update. One of the best ways to do this is by following the Single Responsibility Principle (SRP), the first of the SOLID principles. <!-- SC_ON --> submitted by /u/abhijith1203 (https://www.reddit.com/user/abhijith1203)
[link] (https://abhijithpurohit.medium.com/write-better-c-code-with-the-single-responsibility-principle-080c6d252964) [comments] (https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/)
https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/
<!-- SC_OFF -->Writing clean code is a must for any developer who wants their work to shine. It’s not just about getting your program to run; it’s about making code that’s easy to read, test, and update. One of the best ways to do this is by following the Single Responsibility Principle (SRP), the first of the SOLID principles. <!-- SC_ON --> submitted by /u/abhijith1203 (https://www.reddit.com/user/abhijith1203)
[link] (https://abhijithpurohit.medium.com/write-better-c-code-with-the-single-responsibility-principle-080c6d252964) [comments] (https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/)
How Spotify Saved $18M With Smart Compression (And Why Most Teams Get It Wrong)
https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/
<!-- SC_OFF -->TL;DR: Compression isn't just "make files smaller" - it's architectural strategy that can save millions or crash your site during Black Friday. The Eye-Opening Discovery: Spotify found that 40% of their bandwidth costs came from uncompressed metadata synchronization. Not the music files users actually wanted - the invisible data that keeps everything working. What Most Teams Do Wrong: Engineer: "Let's enable maximum compression on everything!" *Enables Brotli level 11 on all endpoints* *Black Friday traffic hits* *Site dies from CPU overload* *$2M in lost sales* This actually happened to an e-commerce company. Classic optimization-turned-incident. What The Giants Do Instead: Netflix's Multi-Layer Strategy: Video: H.264/H.265 (content-specific codecs) Metadata: Brotli (max compression for small data) APIs: ZSTD (balanced for real-time) Result: 40% bandwidth saved, zero performance impact Google's Context-Aware Approach: Search index: Custom algorithms achieving 8:1 ratios Live results: Hardware-accelerated gzip Memory cache: LZ4 for density without speed loss Handles 8.5 billion daily queries under 100ms Amazon's Intelligent Tiering: Hot data: Uncompressed (speed priority) Warm data: Standard compression (balanced) Cold data: Maximum compression (cost priority) Auto-migration based on access patterns The Framework That Actually Works: Start Conservative: ZSTD level 3 everywhere Measure Everything: CPU, memory, response times Adapt Conditions: High CPU → LZ4, Slow network → Brotli Layer Strategy: Different algorithms for CDN vs API vs Storage Key Insight That Changed My Thinking: Compression decisions should be made at the layer where you have the most context about data usage patterns. Mobile users might get aggressive compression to save bandwidth, desktop users get speed-optimized algorithms. Quick Wins You Can Implement Today: Enable gzip on web assets (1-day task, 20-30% immediate savings) Compress API responses over 1KB Use LZ4 for log shipping Don't compress already-compressed files (seems obvious but...) The Math That Matters: Good compression: Less data = Lower costs + Faster transfer + Better UX Bad compression: CPU overload = Slower responses + Higher costs + Incidents Questions for Discussion: What compression disasters have you seen in production? Anyone using adaptive compression based on system conditions? How do you monitor compression effectiveness in your stack? The difference between teams that save millions and teams that create incidents often comes down to treating compression as an architectural decision rather than a configuration flag. Source: This analysis comes from the systemdr newsletter where we break down distributed systems patterns from companies handling billions of requests. <!-- SC_ON --> submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://systemdr.substack.com/p/data-compression-techniques-for-scaling) [comments] (https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/)
https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/
<!-- SC_OFF -->TL;DR: Compression isn't just "make files smaller" - it's architectural strategy that can save millions or crash your site during Black Friday. The Eye-Opening Discovery: Spotify found that 40% of their bandwidth costs came from uncompressed metadata synchronization. Not the music files users actually wanted - the invisible data that keeps everything working. What Most Teams Do Wrong: Engineer: "Let's enable maximum compression on everything!" *Enables Brotli level 11 on all endpoints* *Black Friday traffic hits* *Site dies from CPU overload* *$2M in lost sales* This actually happened to an e-commerce company. Classic optimization-turned-incident. What The Giants Do Instead: Netflix's Multi-Layer Strategy: Video: H.264/H.265 (content-specific codecs) Metadata: Brotli (max compression for small data) APIs: ZSTD (balanced for real-time) Result: 40% bandwidth saved, zero performance impact Google's Context-Aware Approach: Search index: Custom algorithms achieving 8:1 ratios Live results: Hardware-accelerated gzip Memory cache: LZ4 for density without speed loss Handles 8.5 billion daily queries under 100ms Amazon's Intelligent Tiering: Hot data: Uncompressed (speed priority) Warm data: Standard compression (balanced) Cold data: Maximum compression (cost priority) Auto-migration based on access patterns The Framework That Actually Works: Start Conservative: ZSTD level 3 everywhere Measure Everything: CPU, memory, response times Adapt Conditions: High CPU → LZ4, Slow network → Brotli Layer Strategy: Different algorithms for CDN vs API vs Storage Key Insight That Changed My Thinking: Compression decisions should be made at the layer where you have the most context about data usage patterns. Mobile users might get aggressive compression to save bandwidth, desktop users get speed-optimized algorithms. Quick Wins You Can Implement Today: Enable gzip on web assets (1-day task, 20-30% immediate savings) Compress API responses over 1KB Use LZ4 for log shipping Don't compress already-compressed files (seems obvious but...) The Math That Matters: Good compression: Less data = Lower costs + Faster transfer + Better UX Bad compression: CPU overload = Slower responses + Higher costs + Incidents Questions for Discussion: What compression disasters have you seen in production? Anyone using adaptive compression based on system conditions? How do you monitor compression effectiveness in your stack? The difference between teams that save millions and teams that create incidents often comes down to treating compression as an architectural decision rather than a configuration flag. Source: This analysis comes from the systemdr newsletter where we break down distributed systems patterns from companies handling billions of requests. <!-- SC_ON --> submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://systemdr.substack.com/p/data-compression-techniques-for-scaling) [comments] (https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/)
Inheritance vs. Composition
https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/
submitted by /u/bowbahdoe (https://www.reddit.com/user/bowbahdoe)
[link] (https://mccue.dev/pages/7-27-25-inheritance-vs-composition) [comments] (https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/)
https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/
submitted by /u/bowbahdoe (https://www.reddit.com/user/bowbahdoe)
[link] (https://mccue.dev/pages/7-27-25-inheritance-vs-composition) [comments] (https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/)
Engineering With Java: Digest #57
https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/
<!-- SC_OFF -->𝐓𝐡𝐞 𝐥𝐚𝐭𝐞𝐬𝐭 𝐞𝐝𝐢𝐭𝐢𝐨𝐧 𝐨𝐟 𝐭𝐡𝐞 𝐉𝐚𝐯𝐚 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫 𝐢𝐬 𝐨𝐮𝐭! 𝐓𝐡𝐢𝐬 𝐰𝐞𝐞𝐤'𝐬 𝐜𝐨𝐥𝐥𝐞𝐜𝐭𝐢𝐨𝐧 𝐢𝐧𝐜𝐥𝐮𝐝𝐞𝐬: > Self-Healing Microservices: Implementing Health Checks with Spring Boot and Kubernetes > JEP targeted to JDK 25: 520: JFR Method Timing & Tracing > Agent Memory with Spring AI & Redis > A Sneak Peek at the Stable Values API > Java 22 to 24: Level up your Java Code by embracing new features in a safe way > Spring Cloud Stream: Event-Driven Architecture – Part 1 > Undocumented Java 16 Feature: The End-of-File Comment > Service Mesh in Java: Istio and Linkerd Integration for Secure Microservices 𝐂𝐡𝐞𝐜𝐤 𝐨𝐮𝐭 𝐭𝐡𝐞 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫 𝐚𝐧𝐝 𝐬𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐟𝐨𝐫 𝐰𝐞𝐞𝐤𝐥𝐲 𝐮𝐩𝐝𝐚𝐭𝐞𝐬: https://javabulletin.substack.com/p/engineering-with-java-digest-57 #java #spring #newsletter #springboot <!-- SC_ON --> submitted by /u/Educational-Ad2036 (https://www.reddit.com/user/Educational-Ad2036)
[link] (https://javabulletin.substack.com/p/engineering-with-java-digest-57) [comments] (https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/)
https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/
<!-- SC_OFF -->𝐓𝐡𝐞 𝐥𝐚𝐭𝐞𝐬𝐭 𝐞𝐝𝐢𝐭𝐢𝐨𝐧 𝐨𝐟 𝐭𝐡𝐞 𝐉𝐚𝐯𝐚 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫 𝐢𝐬 𝐨𝐮𝐭! 𝐓𝐡𝐢𝐬 𝐰𝐞𝐞𝐤'𝐬 𝐜𝐨𝐥𝐥𝐞𝐜𝐭𝐢𝐨𝐧 𝐢𝐧𝐜𝐥𝐮𝐝𝐞𝐬: > Self-Healing Microservices: Implementing Health Checks with Spring Boot and Kubernetes > JEP targeted to JDK 25: 520: JFR Method Timing & Tracing > Agent Memory with Spring AI & Redis > A Sneak Peek at the Stable Values API > Java 22 to 24: Level up your Java Code by embracing new features in a safe way > Spring Cloud Stream: Event-Driven Architecture – Part 1 > Undocumented Java 16 Feature: The End-of-File Comment > Service Mesh in Java: Istio and Linkerd Integration for Secure Microservices 𝐂𝐡𝐞𝐜𝐤 𝐨𝐮𝐭 𝐭𝐡𝐞 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫 𝐚𝐧𝐝 𝐬𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐟𝐨𝐫 𝐰𝐞𝐞𝐤𝐥𝐲 𝐮𝐩𝐝𝐚𝐭𝐞𝐬: https://javabulletin.substack.com/p/engineering-with-java-digest-57 #java #spring #newsletter #springboot <!-- SC_ON --> submitted by /u/Educational-Ad2036 (https://www.reddit.com/user/Educational-Ad2036)
[link] (https://javabulletin.substack.com/p/engineering-with-java-digest-57) [comments] (https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/)
Just completed the CS Girlies “AI vs H.I.” hackathon and this is what I want to tell my girlies
https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/
<!-- SC_OFF -->This month, I came across a post from CS Girlies, whom I genuinely idealize (following Michelle for an year). Just wrapped it Up and I must say, this experience boosted my confidence and programming skills both. Thanks to my amazing team for working so hard in this hackathon. What I want you to takeaway from this post: As a woman in CS, I’ve often felt like I needed to prove myself but no opportunity felt right to me or I was too hesitant maybe. But remember, that's not the case. I was afraid to take part in hackathons, though I have been making projects for a long time. Now when I saw a hackathon organized by girls, for the girls, I thought lets go! Turned out the best decision so far in my life. The mentors in discord and EVERYTHING was perfect. What we built:
My team (consisting of 5 girls) worked on a mood based arcade game. We made sure to make it US. Added everyone's ideas and It was cute, expressive, and totally “us,” with a definite girlie touch.! Why You should Try it: The hackathon is designed by girls, for girls, and welcomes all experience levels—no prior AI or hackathon background necessary. You should try it too. CS Girlies works incredibly hard to create spaces like this where girls can shine, learn, and build without needing prior experience. The tracks are beginner-friendly, creative, and emphasize emotion, intuition, and authenticity over optimization. <!-- SC_ON --> submitted by /u/Nervous_Lab_2401 (https://www.reddit.com/user/Nervous_Lab_2401)
[link] (https://www.csgirlies.com/hackathon) [comments] (https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/)
https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/
<!-- SC_OFF -->This month, I came across a post from CS Girlies, whom I genuinely idealize (following Michelle for an year). Just wrapped it Up and I must say, this experience boosted my confidence and programming skills both. Thanks to my amazing team for working so hard in this hackathon. What I want you to takeaway from this post: As a woman in CS, I’ve often felt like I needed to prove myself but no opportunity felt right to me or I was too hesitant maybe. But remember, that's not the case. I was afraid to take part in hackathons, though I have been making projects for a long time. Now when I saw a hackathon organized by girls, for the girls, I thought lets go! Turned out the best decision so far in my life. The mentors in discord and EVERYTHING was perfect. What we built:
My team (consisting of 5 girls) worked on a mood based arcade game. We made sure to make it US. Added everyone's ideas and It was cute, expressive, and totally “us,” with a definite girlie touch.! Why You should Try it: The hackathon is designed by girls, for girls, and welcomes all experience levels—no prior AI or hackathon background necessary. You should try it too. CS Girlies works incredibly hard to create spaces like this where girls can shine, learn, and build without needing prior experience. The tracks are beginner-friendly, creative, and emphasize emotion, intuition, and authenticity over optimization. <!-- SC_ON --> submitted by /u/Nervous_Lab_2401 (https://www.reddit.com/user/Nervous_Lab_2401)
[link] (https://www.csgirlies.com/hackathon) [comments] (https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/)
1 minute of Verlet Integration
https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/
<!-- SC_OFF -->I've made a video recently on one of my favourite methods for solving Newton's equations. It is available on YouTube Shorts 🎥 It wasn't clear to me if this is worth a full article or just a short comment. Let me start with a supplementary material for the video first, and then we shall see... <!-- SC_ON --> submitted by /u/Inst2f (https://www.reddit.com/user/Inst2f)
[link] (https://wljs.io/blog/2025/07/27/verlet-supp/) [comments] (https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/)
https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/
<!-- SC_OFF -->I've made a video recently on one of my favourite methods for solving Newton's equations. It is available on YouTube Shorts 🎥 It wasn't clear to me if this is worth a full article or just a short comment. Let me start with a supplementary material for the video first, and then we shall see... <!-- SC_ON --> submitted by /u/Inst2f (https://www.reddit.com/user/Inst2f)
[link] (https://wljs.io/blog/2025/07/27/verlet-supp/) [comments] (https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/)
Making Postgres 42,000x slower because I am unemployed
https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/
submitted by /u/AsyncBanana (https://www.reddit.com/user/AsyncBanana)
[link] (https://byteofdev.com/posts/making-postgres-slow/) [comments] (https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/)
https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/
submitted by /u/AsyncBanana (https://www.reddit.com/user/AsyncBanana)
[link] (https://byteofdev.com/posts/making-postgres-slow/) [comments] (https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/)
I used Qwen3-Coder to generate functional web apps from scratch
https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/
submitted by /u/Few-Sorbet5722 (https://www.reddit.com/user/Few-Sorbet5722)
[link] (https://youtu.be/l65aOfy4NgQ) [comments] (https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/)
https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/
submitted by /u/Few-Sorbet5722 (https://www.reddit.com/user/Few-Sorbet5722)
[link] (https://youtu.be/l65aOfy4NgQ) [comments] (https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/)
Reverse Proxy Deep Dive (Part 3): The Hidden Complexity of Service Discovery
https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/
<!-- SC_OFF -->I’m sharing Part 3 of a series exploring the internals of reverse proxies at scale. This post dives into service discovery, a problem that sounds straightforward but reveals many hidden challenges in dynamic environments. Topics covered include: static host lists, DNS-based discovery with TTL tradeoffs, external systems like ZooKeeper and Envoy’s xDS, and active vs passive health checks. The post also discusses real-world problems like DNS size limits and health check storms. If you’ve worked on service discovery or proxy infrastructure, I’d love to hear your experiences or thoughts. Full post here (about 10 minutes): https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html
Parts 1 and 2 cover connection management and HTTP parsing. <!-- SC_ON --> submitted by /u/MiggyIshu (https://www.reddit.com/user/MiggyIshu)
[link] (https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html) [comments] (https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/)
https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/
<!-- SC_OFF -->I’m sharing Part 3 of a series exploring the internals of reverse proxies at scale. This post dives into service discovery, a problem that sounds straightforward but reveals many hidden challenges in dynamic environments. Topics covered include: static host lists, DNS-based discovery with TTL tradeoffs, external systems like ZooKeeper and Envoy’s xDS, and active vs passive health checks. The post also discusses real-world problems like DNS size limits and health check storms. If you’ve worked on service discovery or proxy infrastructure, I’d love to hear your experiences or thoughts. Full post here (about 10 minutes): https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html
Parts 1 and 2 cover connection management and HTTP parsing. <!-- SC_ON --> submitted by /u/MiggyIshu (https://www.reddit.com/user/MiggyIshu)
[link] (https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html) [comments] (https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/)
Throttle Doctor: Interactive JS Event Handling
https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/
<!-- SC_OFF -->Hey r/javascript (https://www.reddit.com/r/javascript), I've built Throttle Doctor, an interactive app to help you visually understand and fine-tune event handling in JavaScript. If you've ever struggled with performance due to rapid-fire events (like mouse moves or scroll events), this tool is for you. What it does: It's a sandbox for experimenting with debounce and throttle techniques. You can adjust parameters like wait time, leading edge, and trailing edge to see their immediate impact on function execution, helping you optimize your code and prevent "event overload." Why it's useful: See it in action: Visualizes how debouncing and throttling control function calls. Learn by doing: Tweak settings and observe real-time results. Optimize performance: Understand how to prevent unnecessary executions. Try the live demo: https://duroktar.github.io/ThrottleDoctor/ Check out the code: https://github.com/Duroktar/ThrottleDoctor Note: This app showcases a throttleDebounce function, but a standalone library is not yet released. It's a proof-of-concept, and a library will be considered based on demand. Let me know your thoughts! Disclaimer: This post was created with AI assistance. The project was primarily vibe-coded, with minimal user tweaks <!-- SC_ON --> submitted by /u/Duroktar (https://www.reddit.com/user/Duroktar)
[link] (https://duroktar.github.io/ThrottleDoctor/) [comments] (https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/)
https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/
<!-- SC_OFF -->Hey r/javascript (https://www.reddit.com/r/javascript), I've built Throttle Doctor, an interactive app to help you visually understand and fine-tune event handling in JavaScript. If you've ever struggled with performance due to rapid-fire events (like mouse moves or scroll events), this tool is for you. What it does: It's a sandbox for experimenting with debounce and throttle techniques. You can adjust parameters like wait time, leading edge, and trailing edge to see their immediate impact on function execution, helping you optimize your code and prevent "event overload." Why it's useful: See it in action: Visualizes how debouncing and throttling control function calls. Learn by doing: Tweak settings and observe real-time results. Optimize performance: Understand how to prevent unnecessary executions. Try the live demo: https://duroktar.github.io/ThrottleDoctor/ Check out the code: https://github.com/Duroktar/ThrottleDoctor Note: This app showcases a throttleDebounce function, but a standalone library is not yet released. It's a proof-of-concept, and a library will be considered based on demand. Let me know your thoughts! Disclaimer: This post was created with AI assistance. The project was primarily vibe-coded, with minimal user tweaks <!-- SC_ON --> submitted by /u/Duroktar (https://www.reddit.com/user/Duroktar)
[link] (https://duroktar.github.io/ThrottleDoctor/) [comments] (https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/)
I fine-tuned an SLM -- here's what helped me get good results (and other learnings)
https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/
<!-- SC_OFF -->This weekend I fine-tuned the Qwen-3 0.6B model. I wanted a very lightweight model that can classify whether any user query going into my AI agents is a malicious prompt attack. I started by creating a dataset of 4000+ malicious queries using GPT-4o. I also added in a dataset of the same number of harmless queries. Attempt 1: Using this dataset, I ran SFT on the base version of the SLM on the queries. The resulting model was unusable, classifying every query as malicious. Attempt 2: I fine-tuned Qwen/Qwen3-0.6B instead, and this time spent more time prompt-tuning the instructions too. This gave me slightly improved accuracy but I noticed that it struggled at edge cases. eg, if a harmless prompt contains the term "System prompt", it gets flagged too. I realised I might need Chain of Thought to get there. I decided to start off by making the model start off with just one sentence of reasoning behind its prediction. Attempt 3: I created a new dataset, this time adding reasoning behind each malicious query. I fine-tuned the model on it again. It was an Aha! moment -- the model runs very accurately and I'm happy with the results. Planning to use this as a middleware between users and AI agents I build. <!-- SC_ON --> submitted by /u/sarthakai (https://www.reddit.com/user/sarthakai)
[link] (https://github.com/sarthakrastogi/rival) [comments] (https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/)
https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/
<!-- SC_OFF -->This weekend I fine-tuned the Qwen-3 0.6B model. I wanted a very lightweight model that can classify whether any user query going into my AI agents is a malicious prompt attack. I started by creating a dataset of 4000+ malicious queries using GPT-4o. I also added in a dataset of the same number of harmless queries. Attempt 1: Using this dataset, I ran SFT on the base version of the SLM on the queries. The resulting model was unusable, classifying every query as malicious. Attempt 2: I fine-tuned Qwen/Qwen3-0.6B instead, and this time spent more time prompt-tuning the instructions too. This gave me slightly improved accuracy but I noticed that it struggled at edge cases. eg, if a harmless prompt contains the term "System prompt", it gets flagged too. I realised I might need Chain of Thought to get there. I decided to start off by making the model start off with just one sentence of reasoning behind its prediction. Attempt 3: I created a new dataset, this time adding reasoning behind each malicious query. I fine-tuned the model on it again. It was an Aha! moment -- the model runs very accurately and I'm happy with the results. Planning to use this as a middleware between users and AI agents I build. <!-- SC_ON --> submitted by /u/sarthakai (https://www.reddit.com/user/sarthakai)
[link] (https://github.com/sarthakrastogi/rival) [comments] (https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/)
Scaling Node-RED for HTTP based flows
https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/
submitted by /u/Fried_Kachori (https://www.reddit.com/user/Fried_Kachori)
[link] (https://ahmadd.hashnode.dev/scaling-node-red-for-http-based-flows) [comments] (https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/)
https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/
submitted by /u/Fried_Kachori (https://www.reddit.com/user/Fried_Kachori)
[link] (https://ahmadd.hashnode.dev/scaling-node-red-for-http-based-flows) [comments] (https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/)
Learn System Design Fundamentals With Examples
https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/
<!-- SC_OFF -->Learn System Design Fundamentals With Examples From CAP Theorem, Networking Basics, to Performance, Scalability, Availability, Security, Reliability etc. <!-- SC_ON --> submitted by /u/erdsingh24 (https://www.reddit.com/user/erdsingh24)
[link] (https://javatechonline.com/system-design-fundamentals/) [comments] (https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/)
https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/
<!-- SC_OFF -->Learn System Design Fundamentals With Examples From CAP Theorem, Networking Basics, to Performance, Scalability, Availability, Security, Reliability etc. <!-- SC_ON --> submitted by /u/erdsingh24 (https://www.reddit.com/user/erdsingh24)
[link] (https://javatechonline.com/system-design-fundamentals/) [comments] (https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/)
Yet another dev thinking he's a cybersecurity expert 💀
https://www.reddit.com/r/programming/comments/1mb9beb/yet_another_dev_thinking_hes_a_cybersecurity/
<!-- SC_OFF -->So I decided to make an "antivirus" for Node.js. It checks uploaded files, marks them as clean / suspicious / malicious, and even lets you plug in YARA rules. Basically: "Yo bro, your ZIP file smells like malware, I ain't saving that." Is this useful, funny, or just plain cringe? I can’t tell anymore. <!-- SC_ON --> submitted by /u/Extension-Count-2412 (https://www.reddit.com/user/Extension-Count-2412)
[link] (https://www.npmjs.com/package/pompelmi?activeTab=readme) [comments] (https://www.reddit.com/r/programming/comments/1mb9beb/yet_another_dev_thinking_hes_a_cybersecurity/)
https://www.reddit.com/r/programming/comments/1mb9beb/yet_another_dev_thinking_hes_a_cybersecurity/
<!-- SC_OFF -->So I decided to make an "antivirus" for Node.js. It checks uploaded files, marks them as clean / suspicious / malicious, and even lets you plug in YARA rules. Basically: "Yo bro, your ZIP file smells like malware, I ain't saving that." Is this useful, funny, or just plain cringe? I can’t tell anymore. <!-- SC_ON --> submitted by /u/Extension-Count-2412 (https://www.reddit.com/user/Extension-Count-2412)
[link] (https://www.npmjs.com/package/pompelmi?activeTab=readme) [comments] (https://www.reddit.com/r/programming/comments/1mb9beb/yet_another_dev_thinking_hes_a_cybersecurity/)
Dynamic Phase Alignment in Audio – Sander J. Skjegstad – BSC 2025
https://www.reddit.com/r/programming/comments/1mbawfh/dynamic_phase_alignment_in_audio_sander_j/
submitted by /u/gingerbill (https://www.reddit.com/user/gingerbill)
[link] (https://www.youtube.com/watch?v=JNCVj_RtdZw) [comments] (https://www.reddit.com/r/programming/comments/1mbawfh/dynamic_phase_alignment_in_audio_sander_j/)
https://www.reddit.com/r/programming/comments/1mbawfh/dynamic_phase_alignment_in_audio_sander_j/
submitted by /u/gingerbill (https://www.reddit.com/user/gingerbill)
[link] (https://www.youtube.com/watch?v=JNCVj_RtdZw) [comments] (https://www.reddit.com/r/programming/comments/1mbawfh/dynamic_phase_alignment_in_audio_sander_j/)
Here comes the sun
https://www.reddit.com/r/programming/comments/1mbbolm/here_comes_the_sun/
<!-- SC_OFF -->“Write crates, not programs” is a mantra my students are probably tired of hearing, but it's something I think many programmers would do well to bear in mind. Instead of being a Colonial gunsmith, in Scott Rosenberg's analogy, hand-crafting every nut and screw, we should instead think about how to contribute trusted, stable components to a global repository of robust software: the universal library of Rust. I have a fairly well-defined process for going about this. Here it is. <!-- SC_ON --> submitted by /u/AlexandraLinnea (https://www.reddit.com/user/AlexandraLinnea)
[link] (https://bitfieldconsulting.com/posts/here-comes-sun) [comments] (https://www.reddit.com/r/programming/comments/1mbbolm/here_comes_the_sun/)
https://www.reddit.com/r/programming/comments/1mbbolm/here_comes_the_sun/
<!-- SC_OFF -->“Write crates, not programs” is a mantra my students are probably tired of hearing, but it's something I think many programmers would do well to bear in mind. Instead of being a Colonial gunsmith, in Scott Rosenberg's analogy, hand-crafting every nut and screw, we should instead think about how to contribute trusted, stable components to a global repository of robust software: the universal library of Rust. I have a fairly well-defined process for going about this. Here it is. <!-- SC_ON --> submitted by /u/AlexandraLinnea (https://www.reddit.com/user/AlexandraLinnea)
[link] (https://bitfieldconsulting.com/posts/here-comes-sun) [comments] (https://www.reddit.com/r/programming/comments/1mbbolm/here_comes_the_sun/)
Nadia Odunayo & Scaling Rails for Millions of Users as a Solo Dev - On Rails
https://www.reddit.com/r/programming/comments/1mbjp3a/nadia_odunayo_scaling_rails_for_millions_of_users/
submitted by /u/robbyrussell (https://www.reddit.com/user/robbyrussell)
[link] (https://onrails.buzzsprout.com/2462975/episodes/17575580-nadia-odunayo-scaling-rails-for-millions-of-users-as-a-solo-dev) [comments] (https://www.reddit.com/r/programming/comments/1mbjp3a/nadia_odunayo_scaling_rails_for_millions_of_users/)
https://www.reddit.com/r/programming/comments/1mbjp3a/nadia_odunayo_scaling_rails_for_millions_of_users/
submitted by /u/robbyrussell (https://www.reddit.com/user/robbyrussell)
[link] (https://onrails.buzzsprout.com/2462975/episodes/17575580-nadia-odunayo-scaling-rails-for-millions-of-users-as-a-solo-dev) [comments] (https://www.reddit.com/r/programming/comments/1mbjp3a/nadia_odunayo_scaling_rails_for_millions_of_users/)
Socat – A utility for data transfer between two addresses
https://www.reddit.com/r/programming/comments/1mbjpy7/socat_a_utility_for_data_transfer_between_two/
submitted by /u/BrewedDoritos (https://www.reddit.com/user/BrewedDoritos)
[link] (https://copyconstruct.medium.com/socat-29453e9fc8a6) [comments] (https://www.reddit.com/r/programming/comments/1mbjpy7/socat_a_utility_for_data_transfer_between_two/)
https://www.reddit.com/r/programming/comments/1mbjpy7/socat_a_utility_for_data_transfer_between_two/
submitted by /u/BrewedDoritos (https://www.reddit.com/user/BrewedDoritos)
[link] (https://copyconstruct.medium.com/socat-29453e9fc8a6) [comments] (https://www.reddit.com/r/programming/comments/1mbjpy7/socat_a_utility_for_data_transfer_between_two/)
Exception.add_note
https://www.reddit.com/r/programming/comments/1mbkhzo/exceptionadd_note/
submitted by /u/ketralnis (https://www.reddit.com/user/ketralnis)
[link] (https://daniel.feldroy.com/posts/til-2025-05-exception-add_note) [comments] (https://www.reddit.com/r/programming/comments/1mbkhzo/exceptionadd_note/)
https://www.reddit.com/r/programming/comments/1mbkhzo/exceptionadd_note/
submitted by /u/ketralnis (https://www.reddit.com/user/ketralnis)
[link] (https://daniel.feldroy.com/posts/til-2025-05-exception-add_note) [comments] (https://www.reddit.com/r/programming/comments/1mbkhzo/exceptionadd_note/)