Help ๐ญ
https://www.reddit.com/r/programming/comments/1mahze6/help/
<!-- SC_OFF -->Please someone help with an alternative. <!-- SC_ON --> submitted by /u/Wild_Peace6443 (https://www.reddit.com/user/Wild_Peace6443)
[link] (https://www.reddit.com/r/Btechtards/comments/1mahy76/please_someone_help/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button) [comments] (https://www.reddit.com/r/programming/comments/1mahze6/help/)
https://www.reddit.com/r/programming/comments/1mahze6/help/
<!-- SC_OFF -->Please someone help with an alternative. <!-- SC_ON --> submitted by /u/Wild_Peace6443 (https://www.reddit.com/user/Wild_Peace6443)
[link] (https://www.reddit.com/r/Btechtards/comments/1mahy76/please_someone_help/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button) [comments] (https://www.reddit.com/r/programming/comments/1mahze6/help/)
Ivory: Streamlining PostgreSQL Cluster Management for Devs and DBAs
https://www.reddit.com/r/programming/comments/1makioc/ivory_streamlining_postgresql_cluster_management/
<!-- SC_OFF -->Ivory: Streamlining PostgreSQL Cluster Management for Devs and DBAs If you're managing PostgreSQL clusters, especially with Patroni for high-availability (HA), you know the pain of juggling complex CLI commands and APIs. Enter Ivory, an open-source PostgreSQL management tool designed to simplify and visualize cluster management. Here's a quick dive into why Ivory might be your next go-to for PostgreSQL administration, perfect for sharing with the Reddit community! What is Ivory? Ivory is a user-friendly, open-source tool built to make managing PostgreSQL clustersโparticularly those using Patroniโmore intuitive. It provides a centralized interface to monitor, troubleshoot, and optimize your PostgreSQL HA setups, saving you from endless command-line gymnastics. Whether you're a developer or a DBA, Ivory aims to streamline your workflow with a focus on usability and security. Note: Donโt confuse Ivory with IvorySQL, a different project focused on Oracle-compatible PostgreSQL. This article is all about the management tool! Key Features That Shine Patroni Management Made Easy Ivory wraps Patroniโs complex CLI and API into a clean UI. Need to perform a switchover, failover, restart, or reinitialization? Itโs just a few clicks away. You get a dashboard showing all your Patroni clusters, their statuses, and any warnings, with tagging support to keep things organized. Query Builder for Quick Troubleshooting Tired of writing repetitive SQL queries? Ivoryโs query builder simplifies running specific PostgreSQL queries for troubleshooting and maintenance, saving time and reducing errors. Multi-Cluster Management Manage multiple PostgreSQL clusters across different locations from one interface. No more copy-pasting commands between clustersโIvory handles it all in one place. Security First Authentication: Optional Basic authentication (username/password) for VM deployments, with LDAP/SSO support planned. Mutual TLS: Ivory supports secure PostgreSQL connections with mutual TLS (set your PostgreSQL user to verify-ca mode). Certificate Management: Add and reuse certificates for Patroni, making secure requests a breeze. Bloat Cleanup Ivory integrates with pgcompacttable to tackle table bloat, helping keep your database performance in check. Metrics and Dashboards Get simple charts for instance metrics, with future plans to integrate with Grafana for advanced dashboarding. Itโs a great way to keep an eye on your clustersโ health. Flexible Deployment Run Ivory locally on your machine or deploy it on a VM for team collaboration. It supports Docker with environment variables like IVORY_URL_PATH for reverse proxies and IVORY_CERT_FILE_PATH for TLS certificates (auto-switches to port 443 when configured). Why Youโll Love It Saves Time: No more digging through Patroni docs or memorizing commands. Ivoryโs UI makes cluster management fast and intuitive. Centralized Control: Monitor and manage all your clusters from one place, even across different environments. Community-Driven: As an open-source project, Ivory welcomes contributions. Got an idea for a new feature, like support for other failover tools? Jump into the discussion on GitHub! Getting Started Ivory is easy to set up via Docker. Check the GitHub repo (https://github.com/veegres/ivory) for installation instructions. Be mindful that major/minor releases may not be backward-compatible, so install from scratch for big updates. Patch releases are safer, focusing on bug fixes and minor tweaks. For secure setups, configure TLS certificates and environment variables as needed. If youโre running locally, you can skip authentication for simplicity. Whatโs Next for Ivory? The roadmap includes: PostgreSQL TLS connection support. Integration with other failover tools (based on community demand). Import/export
https://www.reddit.com/r/programming/comments/1makioc/ivory_streamlining_postgresql_cluster_management/
<!-- SC_OFF -->Ivory: Streamlining PostgreSQL Cluster Management for Devs and DBAs If you're managing PostgreSQL clusters, especially with Patroni for high-availability (HA), you know the pain of juggling complex CLI commands and APIs. Enter Ivory, an open-source PostgreSQL management tool designed to simplify and visualize cluster management. Here's a quick dive into why Ivory might be your next go-to for PostgreSQL administration, perfect for sharing with the Reddit community! What is Ivory? Ivory is a user-friendly, open-source tool built to make managing PostgreSQL clustersโparticularly those using Patroniโmore intuitive. It provides a centralized interface to monitor, troubleshoot, and optimize your PostgreSQL HA setups, saving you from endless command-line gymnastics. Whether you're a developer or a DBA, Ivory aims to streamline your workflow with a focus on usability and security. Note: Donโt confuse Ivory with IvorySQL, a different project focused on Oracle-compatible PostgreSQL. This article is all about the management tool! Key Features That Shine Patroni Management Made Easy Ivory wraps Patroniโs complex CLI and API into a clean UI. Need to perform a switchover, failover, restart, or reinitialization? Itโs just a few clicks away. You get a dashboard showing all your Patroni clusters, their statuses, and any warnings, with tagging support to keep things organized. Query Builder for Quick Troubleshooting Tired of writing repetitive SQL queries? Ivoryโs query builder simplifies running specific PostgreSQL queries for troubleshooting and maintenance, saving time and reducing errors. Multi-Cluster Management Manage multiple PostgreSQL clusters across different locations from one interface. No more copy-pasting commands between clustersโIvory handles it all in one place. Security First Authentication: Optional Basic authentication (username/password) for VM deployments, with LDAP/SSO support planned. Mutual TLS: Ivory supports secure PostgreSQL connections with mutual TLS (set your PostgreSQL user to verify-ca mode). Certificate Management: Add and reuse certificates for Patroni, making secure requests a breeze. Bloat Cleanup Ivory integrates with pgcompacttable to tackle table bloat, helping keep your database performance in check. Metrics and Dashboards Get simple charts for instance metrics, with future plans to integrate with Grafana for advanced dashboarding. Itโs a great way to keep an eye on your clustersโ health. Flexible Deployment Run Ivory locally on your machine or deploy it on a VM for team collaboration. It supports Docker with environment variables like IVORY_URL_PATH for reverse proxies and IVORY_CERT_FILE_PATH for TLS certificates (auto-switches to port 443 when configured). Why Youโll Love It Saves Time: No more digging through Patroni docs or memorizing commands. Ivoryโs UI makes cluster management fast and intuitive. Centralized Control: Monitor and manage all your clusters from one place, even across different environments. Community-Driven: As an open-source project, Ivory welcomes contributions. Got an idea for a new feature, like support for other failover tools? Jump into the discussion on GitHub! Getting Started Ivory is easy to set up via Docker. Check the GitHub repo (https://github.com/veegres/ivory) for installation instructions. Be mindful that major/minor releases may not be backward-compatible, so install from scratch for big updates. Patch releases are safer, focusing on bug fixes and minor tweaks. For secure setups, configure TLS certificates and environment variables as needed. If youโre running locally, you can skip authentication for simplicity. Whatโs Next for Ivory? The roadmap includes: PostgreSQL TLS connection support. Integration with other failover tools (based on community demand). Import/export
functionality for smoother upgrades. Grafana integration for richer metrics. Join the Conversation Ivory is a game-changer for PostgreSQL HA management, but itโs still evolving. Have you tried it? Got tips, tricks, or feature requests? Share your thoughts in the comments! If youโre curious about specific use cases or need help with setup, check out Andrei Sergeevโs Medium posts (https://anselvo.medium.com/) or the GitHub repo for more details. Letโs talk about how Ivoryโs making your PostgreSQL life easierโor what youโd love to see added to it! ๐ <!-- SC_ON --> submitted by /u/aelsergeev (https://www.reddit.com/user/aelsergeev)
[link] (https://github.com/veegres/ivory) [comments] (https://www.reddit.com/r/programming/comments/1makioc/ivory_streamlining_postgresql_cluster_management/)
[link] (https://github.com/veegres/ivory) [comments] (https://www.reddit.com/r/programming/comments/1makioc/ivory_streamlining_postgresql_cluster_management/)
HDR & Bloom / Post-Processing tech demonstration on real Nintendo 64
https://www.reddit.com/r/programming/comments/1makmqs/hdr_bloom_postprocessing_tech_demonstration_on/
submitted by /u/r_retrohacking_mod2 (https://www.reddit.com/user/r_retrohacking_mod2)
[link] (https://m.youtube.com/watch?v=XP8g2ngHftY) [comments] (https://www.reddit.com/r/programming/comments/1makmqs/hdr_bloom_postprocessing_tech_demonstration_on/)
https://www.reddit.com/r/programming/comments/1makmqs/hdr_bloom_postprocessing_tech_demonstration_on/
submitted by /u/r_retrohacking_mod2 (https://www.reddit.com/user/r_retrohacking_mod2)
[link] (https://m.youtube.com/watch?v=XP8g2ngHftY) [comments] (https://www.reddit.com/r/programming/comments/1makmqs/hdr_bloom_postprocessing_tech_demonstration_on/)
Become an Engineering Leader Everyone Wants to Work With
https://www.reddit.com/r/programming/comments/1mapgly/become_an_engineering_leader_everyone_wants_to/
submitted by /u/gregorojstersek (https://www.reddit.com/user/gregorojstersek)
[link] (https://www.youtube.com/watch?v=58TAuoEFC7g) [comments] (https://www.reddit.com/r/programming/comments/1mapgly/become_an_engineering_leader_everyone_wants_to/)
https://www.reddit.com/r/programming/comments/1mapgly/become_an_engineering_leader_everyone_wants_to/
submitted by /u/gregorojstersek (https://www.reddit.com/user/gregorojstersek)
[link] (https://www.youtube.com/watch?v=58TAuoEFC7g) [comments] (https://www.reddit.com/r/programming/comments/1mapgly/become_an_engineering_leader_everyone_wants_to/)
How to Make AI Agents Collaborate with ACP (Agent Communication Protocol)
https://www.reddit.com/r/programming/comments/1maphoz/how_to_make_ai_agents_collaborate_with_acp_agent/
submitted by /u/Flashy-Thought-5472 (https://www.reddit.com/user/Flashy-Thought-5472)
[link] (https://www.youtube.com/watch?v=fABcNHKVqYM&list=PLp01ObP3udmq2quR-RfrX4zNut_t_kNot&index=24) [comments] (https://www.reddit.com/r/programming/comments/1maphoz/how_to_make_ai_agents_collaborate_with_acp_agent/)
https://www.reddit.com/r/programming/comments/1maphoz/how_to_make_ai_agents_collaborate_with_acp_agent/
submitted by /u/Flashy-Thought-5472 (https://www.reddit.com/user/Flashy-Thought-5472)
[link] (https://www.youtube.com/watch?v=fABcNHKVqYM&list=PLp01ObP3udmq2quR-RfrX4zNut_t_kNot&index=24) [comments] (https://www.reddit.com/r/programming/comments/1maphoz/how_to_make_ai_agents_collaborate_with_acp_agent/)
Autovacuum Tuning: Stop Table Bloat Before It Hurts
https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/
<!-- SC_OFF -->https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81 <!-- SC_ON --> submitted by /u/Temporary_Depth_2491 (https://www.reddit.com/user/Temporary_Depth_2491)
[link] (https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81) [comments] (https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/)
https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/
<!-- SC_OFF -->https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81 <!-- SC_ON --> submitted by /u/Temporary_Depth_2491 (https://www.reddit.com/user/Temporary_Depth_2491)
[link] (https://medium.com/@rohansodha10/autovacuum-tuning-stop-table-bloat-before-it-hurts-0e39510d0804?sk=57defbd7f909a121b958ea4a536c7f81) [comments] (https://www.reddit.com/r/programming/comments/1maqe0l/autovacuum_tuning_stop_table_bloat_before_it_hurts/)
asyncio: a library with too many sharp corners
https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/
submitted by /u/pkkm (https://www.reddit.com/user/pkkm)
[link] (https://sailor.li/asyncio) [comments] (https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/)
https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/
submitted by /u/pkkm (https://www.reddit.com/user/pkkm)
[link] (https://sailor.li/asyncio) [comments] (https://www.reddit.com/r/programming/comments/1maqxdp/asyncio_a_library_with_too_many_sharp_corners/)
Learn SOLID principles: Single Responsibility Principle
https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/
<!-- SC_OFF -->Writing clean code is a must for any developer who wants their work to shine. Itโs not just about getting your program to run; itโs about making code thatโs easy to read, test, and update. One of the best ways to do this is by following the Single Responsibility Principle (SRP), the first of the SOLID principles. <!-- SC_ON --> submitted by /u/abhijith1203 (https://www.reddit.com/user/abhijith1203)
[link] (https://abhijithpurohit.medium.com/write-better-c-code-with-the-single-responsibility-principle-080c6d252964) [comments] (https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/)
https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/
<!-- SC_OFF -->Writing clean code is a must for any developer who wants their work to shine. Itโs not just about getting your program to run; itโs about making code thatโs easy to read, test, and update. One of the best ways to do this is by following the Single Responsibility Principle (SRP), the first of the SOLID principles. <!-- SC_ON --> submitted by /u/abhijith1203 (https://www.reddit.com/user/abhijith1203)
[link] (https://abhijithpurohit.medium.com/write-better-c-code-with-the-single-responsibility-principle-080c6d252964) [comments] (https://www.reddit.com/r/programming/comments/1mas8pw/learn_solid_principles_single_responsibility/)
How Spotify Saved $18M With Smart Compression (And Why Most Teams Get It Wrong)
https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/
<!-- SC_OFF -->TL;DR: Compression isn't just "make files smaller" - it's architectural strategy that can save millions or crash your site during Black Friday. The Eye-Opening Discovery: Spotify found that 40% of their bandwidth costs came from uncompressed metadata synchronization. Not the music files users actually wanted - the invisible data that keeps everything working. What Most Teams Do Wrong: Engineer: "Let's enable maximum compression on everything!" *Enables Brotli level 11 on all endpoints* *Black Friday traffic hits* *Site dies from CPU overload* *$2M in lost sales* This actually happened to an e-commerce company. Classic optimization-turned-incident. What The Giants Do Instead: Netflix's Multi-Layer Strategy: Video: H.264/H.265 (content-specific codecs) Metadata: Brotli (max compression for small data) APIs: ZSTD (balanced for real-time) Result: 40% bandwidth saved, zero performance impact Google's Context-Aware Approach: Search index: Custom algorithms achieving 8:1 ratios Live results: Hardware-accelerated gzip Memory cache: LZ4 for density without speed loss Handles 8.5 billion daily queries under 100ms Amazon's Intelligent Tiering: Hot data: Uncompressed (speed priority) Warm data: Standard compression (balanced) Cold data: Maximum compression (cost priority) Auto-migration based on access patterns The Framework That Actually Works: Start Conservative: ZSTD level 3 everywhere Measure Everything: CPU, memory, response times Adapt Conditions: High CPU โ LZ4, Slow network โ Brotli Layer Strategy: Different algorithms for CDN vs API vs Storage Key Insight That Changed My Thinking: Compression decisions should be made at the layer where you have the most context about data usage patterns. Mobile users might get aggressive compression to save bandwidth, desktop users get speed-optimized algorithms. Quick Wins You Can Implement Today: Enable gzip on web assets (1-day task, 20-30% immediate savings) Compress API responses over 1KB Use LZ4 for log shipping Don't compress already-compressed files (seems obvious but...) The Math That Matters: Good compression: Less data = Lower costs + Faster transfer + Better UX Bad compression: CPU overload = Slower responses + Higher costs + Incidents Questions for Discussion: What compression disasters have you seen in production? Anyone using adaptive compression based on system conditions? How do you monitor compression effectiveness in your stack? The difference between teams that save millions and teams that create incidents often comes down to treating compression as an architectural decision rather than a configuration flag. Source: This analysis comes from the systemdr newsletter where we break down distributed systems patterns from companies handling billions of requests. <!-- SC_ON --> submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://systemdr.substack.com/p/data-compression-techniques-for-scaling) [comments] (https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/)
https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/
<!-- SC_OFF -->TL;DR: Compression isn't just "make files smaller" - it's architectural strategy that can save millions or crash your site during Black Friday. The Eye-Opening Discovery: Spotify found that 40% of their bandwidth costs came from uncompressed metadata synchronization. Not the music files users actually wanted - the invisible data that keeps everything working. What Most Teams Do Wrong: Engineer: "Let's enable maximum compression on everything!" *Enables Brotli level 11 on all endpoints* *Black Friday traffic hits* *Site dies from CPU overload* *$2M in lost sales* This actually happened to an e-commerce company. Classic optimization-turned-incident. What The Giants Do Instead: Netflix's Multi-Layer Strategy: Video: H.264/H.265 (content-specific codecs) Metadata: Brotli (max compression for small data) APIs: ZSTD (balanced for real-time) Result: 40% bandwidth saved, zero performance impact Google's Context-Aware Approach: Search index: Custom algorithms achieving 8:1 ratios Live results: Hardware-accelerated gzip Memory cache: LZ4 for density without speed loss Handles 8.5 billion daily queries under 100ms Amazon's Intelligent Tiering: Hot data: Uncompressed (speed priority) Warm data: Standard compression (balanced) Cold data: Maximum compression (cost priority) Auto-migration based on access patterns The Framework That Actually Works: Start Conservative: ZSTD level 3 everywhere Measure Everything: CPU, memory, response times Adapt Conditions: High CPU โ LZ4, Slow network โ Brotli Layer Strategy: Different algorithms for CDN vs API vs Storage Key Insight That Changed My Thinking: Compression decisions should be made at the layer where you have the most context about data usage patterns. Mobile users might get aggressive compression to save bandwidth, desktop users get speed-optimized algorithms. Quick Wins You Can Implement Today: Enable gzip on web assets (1-day task, 20-30% immediate savings) Compress API responses over 1KB Use LZ4 for log shipping Don't compress already-compressed files (seems obvious but...) The Math That Matters: Good compression: Less data = Lower costs + Faster transfer + Better UX Bad compression: CPU overload = Slower responses + Higher costs + Incidents Questions for Discussion: What compression disasters have you seen in production? Anyone using adaptive compression based on system conditions? How do you monitor compression effectiveness in your stack? The difference between teams that save millions and teams that create incidents often comes down to treating compression as an architectural decision rather than a configuration flag. Source: This analysis comes from the systemdr newsletter where we break down distributed systems patterns from companies handling billions of requests. <!-- SC_ON --> submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://systemdr.substack.com/p/data-compression-techniques-for-scaling) [comments] (https://www.reddit.com/r/programming/comments/1masbln/how_spotify_saved_18m_with_smart_compression_and/)
Inheritance vs. Composition
https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/
submitted by /u/bowbahdoe (https://www.reddit.com/user/bowbahdoe)
[link] (https://mccue.dev/pages/7-27-25-inheritance-vs-composition) [comments] (https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/)
https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/
submitted by /u/bowbahdoe (https://www.reddit.com/user/bowbahdoe)
[link] (https://mccue.dev/pages/7-27-25-inheritance-vs-composition) [comments] (https://www.reddit.com/r/programming/comments/1matz74/inheritance_vs_composition/)
Engineering With Java: Digest #57
https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/
<!-- SC_OFF -->๐๐ก๐ ๐ฅ๐๐ญ๐๐ฌ๐ญ ๐๐๐ข๐ญ๐ข๐จ๐ง ๐จ๐ ๐ญ๐ก๐ ๐๐๐ฏ๐ ๐ง๐๐ฐ๐ฌ๐ฅ๐๐ญ๐ญ๐๐ซ ๐ข๐ฌ ๐จ๐ฎ๐ญ! ๐๐ก๐ข๐ฌ ๐ฐ๐๐๐ค'๐ฌ ๐๐จ๐ฅ๐ฅ๐๐๐ญ๐ข๐จ๐ง ๐ข๐ง๐๐ฅ๐ฎ๐๐๐ฌ: > Self-Healing Microservices: Implementing Health Checks with Spring Boot and Kubernetes > JEP targeted to JDK 25: 520: JFR Method Timing & Tracing > Agent Memory with Spring AI & Redis > A Sneak Peek at the Stable Values API > Java 22 to 24: Level up your Java Code by embracing new features in a safe way > Spring Cloud Stream: Event-Driven Architecture โ Part 1 > Undocumented Java 16 Feature: The End-of-File Comment > Service Mesh in Java: Istio and Linkerd Integration for Secure Microservices ๐๐ก๐๐๐ค ๐จ๐ฎ๐ญ ๐ญ๐ก๐ ๐ง๐๐ฐ๐ฌ๐ฅ๐๐ญ๐ญ๐๐ซ ๐๐ง๐ ๐ฌ๐ฎ๐๐ฌ๐๐ซ๐ข๐๐ ๐๐จ๐ซ ๐ฐ๐๐๐ค๐ฅ๐ฒ ๐ฎ๐ฉ๐๐๐ญ๐๐ฌ: https://javabulletin.substack.com/p/engineering-with-java-digest-57 #java #spring #newsletter #springboot <!-- SC_ON --> submitted by /u/Educational-Ad2036 (https://www.reddit.com/user/Educational-Ad2036)
[link] (https://javabulletin.substack.com/p/engineering-with-java-digest-57) [comments] (https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/)
https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/
<!-- SC_OFF -->๐๐ก๐ ๐ฅ๐๐ญ๐๐ฌ๐ญ ๐๐๐ข๐ญ๐ข๐จ๐ง ๐จ๐ ๐ญ๐ก๐ ๐๐๐ฏ๐ ๐ง๐๐ฐ๐ฌ๐ฅ๐๐ญ๐ญ๐๐ซ ๐ข๐ฌ ๐จ๐ฎ๐ญ! ๐๐ก๐ข๐ฌ ๐ฐ๐๐๐ค'๐ฌ ๐๐จ๐ฅ๐ฅ๐๐๐ญ๐ข๐จ๐ง ๐ข๐ง๐๐ฅ๐ฎ๐๐๐ฌ: > Self-Healing Microservices: Implementing Health Checks with Spring Boot and Kubernetes > JEP targeted to JDK 25: 520: JFR Method Timing & Tracing > Agent Memory with Spring AI & Redis > A Sneak Peek at the Stable Values API > Java 22 to 24: Level up your Java Code by embracing new features in a safe way > Spring Cloud Stream: Event-Driven Architecture โ Part 1 > Undocumented Java 16 Feature: The End-of-File Comment > Service Mesh in Java: Istio and Linkerd Integration for Secure Microservices ๐๐ก๐๐๐ค ๐จ๐ฎ๐ญ ๐ญ๐ก๐ ๐ง๐๐ฐ๐ฌ๐ฅ๐๐ญ๐ญ๐๐ซ ๐๐ง๐ ๐ฌ๐ฎ๐๐ฌ๐๐ซ๐ข๐๐ ๐๐จ๐ซ ๐ฐ๐๐๐ค๐ฅ๐ฒ ๐ฎ๐ฉ๐๐๐ญ๐๐ฌ: https://javabulletin.substack.com/p/engineering-with-java-digest-57 #java #spring #newsletter #springboot <!-- SC_ON --> submitted by /u/Educational-Ad2036 (https://www.reddit.com/user/Educational-Ad2036)
[link] (https://javabulletin.substack.com/p/engineering-with-java-digest-57) [comments] (https://www.reddit.com/r/programming/comments/1mavu0o/engineering_with_java_digest_57/)
Just completed the CS Girlies โAI vs H.I.โ hackathon and this is what I want to tell my girlies
https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/
<!-- SC_OFF -->This month, I came across a post from CS Girlies, whom I genuinely idealize (following Michelle for an year). Just wrapped it Up and I must say, this experience boosted my confidence and programming skills both. Thanks to my amazing team for working so hard in this hackathon. What I want you to takeaway from this post: As a woman in CS, Iโve often felt like I needed to prove myself but no opportunity felt right to me or I was too hesitant maybe. But remember, that's not the case. I was afraid to take part in hackathons, though I have been making projects for a long time. Now when I saw a hackathon organized by girls, for the girls, I thought lets go! Turned out the best decision so far in my life. The mentors in discord and EVERYTHING was perfect. What we built:
My team (consisting of 5 girls) worked on a mood based arcade game. We made sure to make it US. Added everyone's ideas and It was cute, expressive, and totally โus,โ with a definite girlie touch.! Why You should Try it: The hackathon is designed by girls, for girls, and welcomes all experience levelsโno prior AI or hackathon background necessary. You should try it too. CS Girlies works incredibly hard to create spaces like this where girls can shine, learn, and build without needing prior experience. The tracks are beginner-friendly, creative, and emphasize emotion, intuition, and authenticity over optimization. <!-- SC_ON --> submitted by /u/Nervous_Lab_2401 (https://www.reddit.com/user/Nervous_Lab_2401)
[link] (https://www.csgirlies.com/hackathon) [comments] (https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/)
https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/
<!-- SC_OFF -->This month, I came across a post from CS Girlies, whom I genuinely idealize (following Michelle for an year). Just wrapped it Up and I must say, this experience boosted my confidence and programming skills both. Thanks to my amazing team for working so hard in this hackathon. What I want you to takeaway from this post: As a woman in CS, Iโve often felt like I needed to prove myself but no opportunity felt right to me or I was too hesitant maybe. But remember, that's not the case. I was afraid to take part in hackathons, though I have been making projects for a long time. Now when I saw a hackathon organized by girls, for the girls, I thought lets go! Turned out the best decision so far in my life. The mentors in discord and EVERYTHING was perfect. What we built:
My team (consisting of 5 girls) worked on a mood based arcade game. We made sure to make it US. Added everyone's ideas and It was cute, expressive, and totally โus,โ with a definite girlie touch.! Why You should Try it: The hackathon is designed by girls, for girls, and welcomes all experience levelsโno prior AI or hackathon background necessary. You should try it too. CS Girlies works incredibly hard to create spaces like this where girls can shine, learn, and build without needing prior experience. The tracks are beginner-friendly, creative, and emphasize emotion, intuition, and authenticity over optimization. <!-- SC_ON --> submitted by /u/Nervous_Lab_2401 (https://www.reddit.com/user/Nervous_Lab_2401)
[link] (https://www.csgirlies.com/hackathon) [comments] (https://www.reddit.com/r/programming/comments/1mavvj1/just_completed_the_cs_girlies_ai_vs_hi_hackathon/)
1 minute of Verlet Integration
https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/
<!-- SC_OFF -->I've made a video recently on one of my favourite methods for solving Newton's equations. It is available on YouTube Shorts ๐ฅ It wasn't clear to me if this is worth a full article or just a short comment. Let me start with a supplementary material for the video first, and then we shall see... <!-- SC_ON --> submitted by /u/Inst2f (https://www.reddit.com/user/Inst2f)
[link] (https://wljs.io/blog/2025/07/27/verlet-supp/) [comments] (https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/)
https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/
<!-- SC_OFF -->I've made a video recently on one of my favourite methods for solving Newton's equations. It is available on YouTube Shorts ๐ฅ It wasn't clear to me if this is worth a full article or just a short comment. Let me start with a supplementary material for the video first, and then we shall see... <!-- SC_ON --> submitted by /u/Inst2f (https://www.reddit.com/user/Inst2f)
[link] (https://wljs.io/blog/2025/07/27/verlet-supp/) [comments] (https://www.reddit.com/r/programming/comments/1maw72t/1_minute_of_verlet_integration/)
Making Postgres 42,000x slower because I am unemployed
https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/
submitted by /u/AsyncBanana (https://www.reddit.com/user/AsyncBanana)
[link] (https://byteofdev.com/posts/making-postgres-slow/) [comments] (https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/)
https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/
submitted by /u/AsyncBanana (https://www.reddit.com/user/AsyncBanana)
[link] (https://byteofdev.com/posts/making-postgres-slow/) [comments] (https://www.reddit.com/r/programming/comments/1maxelb/making_postgres_42000x_slower_because_i_am/)
I used Qwen3-Coder to generate functional web apps from scratch
https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/
submitted by /u/Few-Sorbet5722 (https://www.reddit.com/user/Few-Sorbet5722)
[link] (https://youtu.be/l65aOfy4NgQ) [comments] (https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/)
https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/
submitted by /u/Few-Sorbet5722 (https://www.reddit.com/user/Few-Sorbet5722)
[link] (https://youtu.be/l65aOfy4NgQ) [comments] (https://www.reddit.com/r/programming/comments/1may2tg/i_used_qwen3coder_to_generate_functional_web_apps/)
Reverse Proxy Deep Dive (Part 3): The Hidden Complexity of Service Discovery
https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/
<!-- SC_OFF -->Iโm sharing Part 3 of a series exploring the internals of reverse proxies at scale. This post dives into service discovery, a problem that sounds straightforward but reveals many hidden challenges in dynamic environments. Topics covered include: static host lists, DNS-based discovery with TTL tradeoffs, external systems like ZooKeeper and Envoyโs xDS, and active vs passive health checks. The post also discusses real-world problems like DNS size limits and health check storms. If youโve worked on service discovery or proxy infrastructure, Iโd love to hear your experiences or thoughts. Full post here (about 10 minutes): https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html
Parts 1 and 2 cover connection management and HTTP parsing. <!-- SC_ON --> submitted by /u/MiggyIshu (https://www.reddit.com/user/MiggyIshu)
[link] (https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html) [comments] (https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/)
https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/
<!-- SC_OFF -->Iโm sharing Part 3 of a series exploring the internals of reverse proxies at scale. This post dives into service discovery, a problem that sounds straightforward but reveals many hidden challenges in dynamic environments. Topics covered include: static host lists, DNS-based discovery with TTL tradeoffs, external systems like ZooKeeper and Envoyโs xDS, and active vs passive health checks. The post also discusses real-world problems like DNS size limits and health check storms. If youโve worked on service discovery or proxy infrastructure, Iโd love to hear your experiences or thoughts. Full post here (about 10 minutes): https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html
Parts 1 and 2 cover connection management and HTTP parsing. <!-- SC_ON --> submitted by /u/MiggyIshu (https://www.reddit.com/user/MiggyIshu)
[link] (https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html) [comments] (https://www.reddit.com/r/programming/comments/1mb402l/reverse_proxy_deep_dive_part_3_the_hidden/)
Throttle Doctor: Interactive JS Event Handling
https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/
<!-- SC_OFF -->Hey r/javascript (https://www.reddit.com/r/javascript), I've built Throttle Doctor, an interactive app to help you visually understand and fine-tune event handling in JavaScript. If you've ever struggled with performance due to rapid-fire events (like mouse moves or scroll events), this tool is for you. What it does: It's a sandbox for experimenting with debounce and throttle techniques. You can adjust parameters like wait time, leading edge, and trailing edge to see their immediate impact on function execution, helping you optimize your code and prevent "event overload." Why it's useful: See it in action: Visualizes how debouncing and throttling control function calls. Learn by doing: Tweak settings and observe real-time results. Optimize performance: Understand how to prevent unnecessary executions. Try the live demo: https://duroktar.github.io/ThrottleDoctor/ Check out the code: https://github.com/Duroktar/ThrottleDoctor Note: This app showcases a throttleDebounce function, but a standalone library is not yet released. It's a proof-of-concept, and a library will be considered based on demand. Let me know your thoughts! Disclaimer: This post was created with AI assistance. The project was primarily vibe-coded, with minimal user tweaks <!-- SC_ON --> submitted by /u/Duroktar (https://www.reddit.com/user/Duroktar)
[link] (https://duroktar.github.io/ThrottleDoctor/) [comments] (https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/)
https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/
<!-- SC_OFF -->Hey r/javascript (https://www.reddit.com/r/javascript), I've built Throttle Doctor, an interactive app to help you visually understand and fine-tune event handling in JavaScript. If you've ever struggled with performance due to rapid-fire events (like mouse moves or scroll events), this tool is for you. What it does: It's a sandbox for experimenting with debounce and throttle techniques. You can adjust parameters like wait time, leading edge, and trailing edge to see their immediate impact on function execution, helping you optimize your code and prevent "event overload." Why it's useful: See it in action: Visualizes how debouncing and throttling control function calls. Learn by doing: Tweak settings and observe real-time results. Optimize performance: Understand how to prevent unnecessary executions. Try the live demo: https://duroktar.github.io/ThrottleDoctor/ Check out the code: https://github.com/Duroktar/ThrottleDoctor Note: This app showcases a throttleDebounce function, but a standalone library is not yet released. It's a proof-of-concept, and a library will be considered based on demand. Let me know your thoughts! Disclaimer: This post was created with AI assistance. The project was primarily vibe-coded, with minimal user tweaks <!-- SC_ON --> submitted by /u/Duroktar (https://www.reddit.com/user/Duroktar)
[link] (https://duroktar.github.io/ThrottleDoctor/) [comments] (https://www.reddit.com/r/programming/comments/1mb641i/throttle_doctor_interactive_js_event_handling/)
I fine-tuned an SLM -- here's what helped me get good results (and other learnings)
https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/
<!-- SC_OFF -->This weekend I fine-tuned the Qwen-3 0.6B model. I wanted a very lightweight model that can classify whether any user query going into my AI agents is a malicious prompt attack. I started by creating a dataset of 4000+ malicious queries using GPT-4o. I also added in a dataset of the same number of harmless queries. Attempt 1: Using this dataset, I ran SFT on the base version of the SLM on the queries. The resulting model was unusable, classifying every query as malicious. Attempt 2: I fine-tuned Qwen/Qwen3-0.6B instead, and this time spent more time prompt-tuning the instructions too. This gave me slightly improved accuracy but I noticed that it struggled at edge cases. eg, if a harmless prompt contains the term "System prompt", it gets flagged too. I realised I might need Chain of Thought to get there. I decided to start off by making the model start off with just one sentence of reasoning behind its prediction. Attempt 3: I created a new dataset, this time adding reasoning behind each malicious query. I fine-tuned the model on it again. It was an Aha! moment -- the model runs very accurately and I'm happy with the results. Planning to use this as a middleware between users and AI agents I build. <!-- SC_ON --> submitted by /u/sarthakai (https://www.reddit.com/user/sarthakai)
[link] (https://github.com/sarthakrastogi/rival) [comments] (https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/)
https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/
<!-- SC_OFF -->This weekend I fine-tuned the Qwen-3 0.6B model. I wanted a very lightweight model that can classify whether any user query going into my AI agents is a malicious prompt attack. I started by creating a dataset of 4000+ malicious queries using GPT-4o. I also added in a dataset of the same number of harmless queries. Attempt 1: Using this dataset, I ran SFT on the base version of the SLM on the queries. The resulting model was unusable, classifying every query as malicious. Attempt 2: I fine-tuned Qwen/Qwen3-0.6B instead, and this time spent more time prompt-tuning the instructions too. This gave me slightly improved accuracy but I noticed that it struggled at edge cases. eg, if a harmless prompt contains the term "System prompt", it gets flagged too. I realised I might need Chain of Thought to get there. I decided to start off by making the model start off with just one sentence of reasoning behind its prediction. Attempt 3: I created a new dataset, this time adding reasoning behind each malicious query. I fine-tuned the model on it again. It was an Aha! moment -- the model runs very accurately and I'm happy with the results. Planning to use this as a middleware between users and AI agents I build. <!-- SC_ON --> submitted by /u/sarthakai (https://www.reddit.com/user/sarthakai)
[link] (https://github.com/sarthakrastogi/rival) [comments] (https://www.reddit.com/r/programming/comments/1mb7khe/i_finetuned_an_slm_heres_what_helped_me_get_good/)
Scaling Node-RED for HTTP based flows
https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/
submitted by /u/Fried_Kachori (https://www.reddit.com/user/Fried_Kachori)
[link] (https://ahmadd.hashnode.dev/scaling-node-red-for-http-based-flows) [comments] (https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/)
https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/
submitted by /u/Fried_Kachori (https://www.reddit.com/user/Fried_Kachori)
[link] (https://ahmadd.hashnode.dev/scaling-node-red-for-http-based-flows) [comments] (https://www.reddit.com/r/programming/comments/1mb7w3d/scaling_nodered_for_http_based_flows/)
Learn System Design Fundamentals With Examples
https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/
<!-- SC_OFF -->Learn System Design Fundamentals With Examples From CAP Theorem, Networking Basics, to Performance, Scalability, Availability, Security, Reliability etc. <!-- SC_ON --> submitted by /u/erdsingh24 (https://www.reddit.com/user/erdsingh24)
[link] (https://javatechonline.com/system-design-fundamentals/) [comments] (https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/)
https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/
<!-- SC_OFF -->Learn System Design Fundamentals With Examples From CAP Theorem, Networking Basics, to Performance, Scalability, Availability, Security, Reliability etc. <!-- SC_ON --> submitted by /u/erdsingh24 (https://www.reddit.com/user/erdsingh24)
[link] (https://javatechonline.com/system-design-fundamentals/) [comments] (https://www.reddit.com/r/programming/comments/1mb8ukk/learn_system_design_fundamentals_with_examples/)