Floodfill algorithm in Python with interactive demos
https://www.reddit.com/r/programming/comments/1p4no76/floodfill_algorithm_in_python_with_interactive/
I wrote this tutorial because I've always liked graph-related algorithms and I wanted to try my hand at writing something with interactive demos. The article teaches you how to implement and use the floodfill algorithm and includes interactive demos to:
- use floodfill to colour regions in an image
- step through the general floodfill algorithm, with annotations of what it is doing at each step
- apply floodfill in a grid with obstacles to see how the starting point affects the process
- use floodfill to count the number of disconnected regions in a grid
- use a modified version of floodfill to simulate fluid spreading over a surface with obstacles

I know the internet can be relentless, but I'm really looking forward to everyone's comments and suggestions, since I love interactive articles and hope to create more of these in the future. Happy reading and let me know what you think! The article: https://mathspp.com/blog/floodfill-algorithm-in-python

submitted by /u/RojerGS (https://www.reddit.com/user/RojerGS)
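As a rough illustration of the technique the article walks through, here is a minimal BFS flood fill in Python; the function names, grid representation, and the region-counting helper are my own sketch, not code taken from the article:

```python
from collections import deque

def flood_fill(grid, start, new_value):
    """Replace the connected region containing `start` with `new_value` (4-neighbour connectivity)."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    target = grid[r0][c0]
    if target == new_value:
        return grid
    queue = deque([(r0, c0)])
    grid[r0][c0] = new_value
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == target:
                grid[nr][nc] = new_value
                queue.append((nr, nc))
    return grid

def count_regions(grid, empty=0, marker=-1):
    """Count disconnected empty regions by filling each one with a temporary marker value."""
    count = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == empty:
                flood_fill(grid, (r, c), marker)
                count += 1
    return count

# Example: two separate empty regions split by a wall of 1s.
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
print(count_regions(grid))  # 2
```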
[link] (https://mathspp.com/blog/floodfill-algorithm-in-python) [comments] (https://www.reddit.com/r/programming/comments/1p4no76/floodfill_algorithm_in_python_with_interactive/)
No, LLVM can't fix your code
https://www.reddit.com/r/programming/comments/1p4o2qa/no_llvm_cant_fix_your_code/
submitted by /u/Commission-Either (https://www.reddit.com/user/Commission-Either)
[link] (https://daymare.net/blogs/no-llvm-cant-fix-your-code/) [comments] (https://www.reddit.com/r/programming/comments/1p4o2qa/no_llvm_cant_fix_your_code/)
Visualizing recursive merge sort with a recursive sequence diagram
https://www.reddit.com/r/programming/comments/1p4obh8/visualizing_recursive_merge_sort_with_a_recursive/
submitted by /u/Veuxdo (https://www.reddit.com/user/Veuxdo)
[link] (https://app.ilograph.com/demo.ilograph.Merge%2520Sort/Merge%2520Sort%2520Main) [comments] (https://www.reddit.com/r/programming/comments/1p4obh8/visualizing_recursive_merge_sort_with_a_recursive/)
Looking for partnership to create multiple micro-SaaS (trial and error, no attachment)
https://www.reddit.com/r/programming/comments/1p4pg4i/looking_for_partnership_to_create_multiple/
Hey guys! I'm a backend developer (Java and Python) and I recently launched a SaaS that ended up not getting any users. Instead of getting discouraged, I'm trying to regain the desire to test new ideas without getting too attached to each project (as I did with the last one, which failed), so the process stays light and we can learn quickly from each attempt. Going it alone is very complicated: it's hard to maintain motivation, focus and speed when you do everything yourself, on top of the lack of time to study and keep the system progressing... That's why I'm looking for someone (or a few people) to form a pair or small team. The idea is simple: test several micro-SaaS or full SaaS ideas, validate quickly, discard without drama and move on. You can be frontend, backend, mobile, designer, marketing… any area that can help. The important thing is a real desire to create, launch and test. If you're interested, comment here or send me a DM and we can exchange ideas ;) submitted by /u/renanaq (https://www.reddit.com/user/renanaq)
[link] (https://inspiras.me/) [comments] (https://www.reddit.com/r/programming/comments/1p4pg4i/looking_for_partnership_to_create_multiple/)
How revenue decisions shape technical debt
https://www.reddit.com/r/programming/comments/1p4qvfc/how_revenue_decisions_shape_technical_debt/
submitted by /u/ArtisticProgrammer11 (https://www.reddit.com/user/ArtisticProgrammer11)
[link] (https://www.hyperact.co.uk/blog/how-revenue-decisions-shape-technical-debt) [comments] (https://www.reddit.com/r/programming/comments/1p4qvfc/how_revenue_decisions_shape_technical_debt/)
My first real Rust project
https://www.reddit.com/r/programming/comments/1p4slri/my_first_real_rust_project/
submitted by /u/nfrankel (https://www.reddit.com/user/nfrankel)
[link] (https://blog.frankel.ch/first-real-rust-project/) [comments] (https://www.reddit.com/r/programming/comments/1p4slri/my_first_real_rust_project/)
B-Trees: Why Every Database Uses Them
https://www.reddit.com/r/programming/comments/1p4ti19/btrees_why_every_database_uses_them/
submitted by /u/m3m3o (https://www.reddit.com/user/m3m3o)
[link] (https://mehmetgoekce.substack.com/p/b-trees-why-every-database-uses-them) [comments] (https://www.reddit.com/r/programming/comments/1p4ti19/btrees_why_every_database_uses_them/)
Alerts: You need a budget!
https://www.reddit.com/r/programming/comments/1p4uvhw/alerts_you_need_a_budget/
No matter the company, the domain, or the culture, I hear DevOps people complain about alert fatigue. This is not strange. Our work can be demanding, and alerts can be a big cause of that "demand". What is strange, in my view, is the general sense of defeatism when it comes to dealing with alert fatigue. Maybe a quick initiative here and there to clean up this or that, but the status quo always returns. We have no structural solutions (that I've seen).
So let me try my hand at proposing a simple idea: budgeting. submitted by /u/IEavan (https://www.reddit.com/user/IEavan)
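The post only names the idea, so purely as a toy illustration (the weekly limit, the rolling window, and the per-team scope are all my assumptions, not the author's proposal), an alert budget could be as small as:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

WEEKLY_BUDGET = 10  # assumed: max pages per team per rolling week

class AlertBudget:
    """Track pages per team over a rolling 7-day window and flag overspend."""

    def __init__(self, budget=WEEKLY_BUDGET):
        self.budget = budget
        self.pages = defaultdict(list)  # team -> timestamps of recent pages

    def record_page(self, team, when=None):
        when = when or datetime.now(timezone.utc)
        cutoff = when - timedelta(days=7)
        recent = [t for t in self.pages[team] if t > cutoff]
        recent.append(when)
        self.pages[team] = recent
        return len(recent) <= self.budget  # False: the team has spent its budget

budget = AlertBudget()
for _ in range(12):
    within_budget = budget.record_page("payments")
if not within_budget:
    print("payments is over its weekly alert budget - time to prune noisy alerts")
```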
[link] (https://eavan.blog/posts/alert-budgeting.html) [comments] (https://www.reddit.com/r/programming/comments/1p4uvhw/alerts_you_need_a_budget/)
Human Capital Management Software (HCM): Why Modern Businesses Can’t Survive Without It in 2025
https://www.reddit.com/r/programming/comments/1p5a6v3/human_capital_management_software_hcm_why_modern/
In 2025, HR operations have officially moved beyond spreadsheets and traditional HRMS tools. Hybrid work, compliance pressure, rapid hiring cycles, and AI-driven workforce analytics are pushing companies toward smarter automation. I put together a complete guide covering:
- What HCM software actually is
- Why companies are switching from HRM to HCM
- Core modules every modern HCM must have
- How AI is transforming recruitment, performance, and employee engagement
- Development cost breakdown (basic → advanced AI systems)
- Why custom HCM is becoming the preferred choice over ready-made tools
- When to build vs. buy
- Examples of modern HCM capabilities

If you're in HR, tech, software development, or building SaaS products, this guide will give you a clear understanding of how HCM is evolving and why it matters. 👉 Read the full guide at the link below. Would love to hear feedback from SaaS founders, HR managers, and dev teams using HCM or building something similar. submitted by /u/Big-Click2648 (https://www.reddit.com/user/Big-Click2648)
[link] (https://codevian.com/blog/modern-hcm-software-guide/) [comments] (https://www.reddit.com/r/programming/comments/1p5a6v3/human_capital_management_software_hcm_why_modern/)
Why "Start Simple" Should Be Your Default in the AI-Assisted Development Era
https://www.reddit.com/r/programming/comments/1p5b0z3/why_start_simple_should_be_your_default_in_the/
A case for resisting over-engineered AI-generated architectures and instead beginning projects with the smallest viable design. Simple, explicit code provides tighter threat surfaces, faster debugging, and far fewer hidden abstractions that developers only partially understand. Before letting AI optimize anything, build the clear, boring version first so you know what the system actually does and can reason about it when things break. submitted by /u/AWildMonomAppears (https://www.reddit.com/user/AWildMonomAppears)
[link] (https://practicalsecurity.substack.com/p/why-starting-simple-is-your-secret) [comments] (https://www.reddit.com/r/programming/comments/1p5b0z3/why_start_simple_should_be_your_default_in_the/)
Celebrate fire preventers, not just firefighters. The stories you praise shape your culture. Choose heroes who build systems, not chaos.
https://www.reddit.com/r/programming/comments/1p5cqmt/celebrate_fire_preventers_not_just_firefighters/
submitted by /u/goto-con (https://www.reddit.com/user/goto-con)
[link] (https://youtube.com/shorts/WuDUJsNNlSM) [comments] (https://www.reddit.com/r/programming/comments/1p5cqmt/celebrate_fire_preventers_not_just_firefighters/)
Finly - Closing the Gap Between Schema-First and Code-First
https://www.reddit.com/r/programming/comments/1p5dh2b/finly_closing_the_gap_between_schemafirst_and/
submitted by /u/Dan6erbond2 (https://www.reddit.com/user/Dan6erbond2)
[link] (https://finly.ch/engineering-blog/350169-closing-the-gap-between-schema-first-and-code-first-graphql-development) [comments] (https://www.reddit.com/r/programming/comments/1p5dh2b/finly_closing_the_gap_between_schemafirst_and/)
TLS Handshake Latency: When Your Load Balancer Becomes a Bottleneck
https://www.reddit.com/r/programming/comments/1p5f7rq/tls_handshake_latency_when_your_load_balancer/
Most engineers think of TLS as network overhead - a few extra round trips that add maybe 50-100ms. But here's what actually happens: when your load balancer receives a new HTTPS connection, it needs to perform CPU-intensive cryptographic operations. We're talking RSA signature verification, ECDHE key exchange calculations, and symmetric key derivation. On a quiet Tuesday morning, each handshake takes 20-30ms. During a traffic spike? That same handshake can take 5 seconds.

The culprit is queueing. Your load balancer has a fixed number of worker threads handling TLS operations. When requests arrive faster than workers can process them, they queue up. Now you're not just dealing with the crypto overhead - you're dealing with wait time in a saturated queue. I've seen production load balancers at major tech companies go from 50ms p99 handshake latency to 8 seconds during deployment events when thousands of connections need re-establishment simultaneously.

https://systemdr.substack.com/p/tls-handshake-latency-when-your-load
https://github.com/sysdr/sdir/tree/main/tls_handshake

submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
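The queueing effect described above is easy to reproduce with a toy model: a fixed pool of workers, a fixed per-handshake cost, and Poisson arrivals. The worker count and the 25 ms service time below are illustrative assumptions, not figures from the post:

```python
# Rough simulation of handshake queueing with a fixed TLS worker pool.
import heapq
import random

def simulate(arrival_rate, workers=8, service_time=0.025, duration=10.0, seed=1):
    random.seed(seed)
    free_at = [0.0] * workers          # when each worker next becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while t < duration:
        t += random.expovariate(arrival_rate)   # Poisson arrivals
        worker_free = heapq.heappop(free_at)
        start = max(t, worker_free)             # wait if all workers are busy
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    waits.sort()
    return waits[len(waits) // 2], waits[int(len(waits) * 0.99)]

for rate in (100, 300, 320, 340):
    p50, p99 = simulate(rate)
    print(f"{rate} handshakes/s -> p50 wait {p50 * 1000:.1f} ms, p99 wait {p99 * 1000:.1f} ms")
```

With these numbers the pool's capacity is workers / service_time = 320 handshakes/s; once arrivals cross that, the p99 wait stops being a few milliseconds and keeps growing for as long as the overload lasts, which is the 50 ms to multi-second jump described above.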
[link] (https://systemdr.substack.com/p/tls-handshake-latency-when-your-load) [comments] (https://www.reddit.com/r/programming/comments/1p5f7rq/tls_handshake_latency_when_your_load_balancer/)
Read-Through vs Write-Through Cache
https://www.reddit.com/r/programming/comments/1p5fh9z/readthrough_vs_writethrough_cache/
submitted by /u/stmoreau (https://www.reddit.com/user/stmoreau)
[link] (https://www.systemdesignbutsimple.com/p/read-through-vs-write-through-cache) [comments] (https://www.reddit.com/r/programming/comments/1p5fh9z/readthrough_vs_writethrough_cache/)
Shai-Hulud Second Coming: Software Supply Chain Attack Exposing Code and Harvesting Credentials
https://www.reddit.com/r/programming/comments/1p5g2ac/shaihulud_second_coming_software_supply_chain/
The Shai-Hulud attackers are back with a new supply chain attack targeting the npm ecosystem. Multiple popular packages were infected with a malicious payload via a preinstall script. The attack is in progress. Some of the indicators include:
- Download and installation of bun
- Execution of bun_environment.js using bun
- Credentials stolen from infected machines and CI/CD being exposed through public GitHub repositories: https://github.com/search?q=%22Sha1-Hulud%3A%20The%20Second%20Coming%22&type=repositories

submitted by /u/N1ghtCod3r (https://www.reddit.com/user/N1ghtCod3r)
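If you want to check a project against the indicators above, a rough local scan might look like this; the payload file name comes from the post, while flagging every package with a preinstall script for manual review is my own assumption:

```python
import json
from pathlib import Path

# Payload file name taken from the indicators above; add others as advisories name them.
SUSPICIOUS_FILES = {"bun_environment.js"}

def scan(project_root="."):
    root = Path(project_root)
    findings = []
    for path in root.glob("node_modules/**/*"):
        if path.name in SUSPICIOUS_FILES:
            findings.append(("payload file", path))
    # Packages with a preinstall script deserve a manual look, since that is the execution vector.
    for pkg_json in root.glob("node_modules/**/package.json"):
        try:
            scripts = json.loads(pkg_json.read_text()).get("scripts", {})
        except (OSError, json.JSONDecodeError, UnicodeDecodeError):
            continue
        if "preinstall" in scripts:
            findings.append(("preinstall script", pkg_json.parent))
    return findings

for kind, location in scan():
    print(f"{kind}: {location}")
```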
[link] (https://safedep.io/shai-hulud-second-coming-supply-chain-attack/) [comments] (https://www.reddit.com/r/programming/comments/1p5g2ac/shaihulud_second_coming_software_supply_chain/)
How many HTTP requests/second can a Single Machine handle?
https://www.reddit.com/r/programming/comments/1p5gins/how_many_http_requestssecond_can_a_single_machine/
When designing systems and deciding on an architecture, microservices and other complex solutions are often justified on the basis of predicted performance and scalability needs. Out of curiosity, I decided to test the performance limits of the simplest possible approach: a single instance of an application, with a single instance of a database, deployed to a single machine. To resemble real-world use cases as much as possible, the setup is:
- Java 21-based REST API built with Spring Boot 3, using Virtual Threads
- PostgreSQL as the database, loaded with over one million rows of data
- External volume for the database - it does not write to the local file system
- Realistic load characteristics: primarily read requests with approximately 20% writes, all going through the REST API to the PostgreSQL database with its million-plus rows
- A single machine in a few sizes: 1 CPU / 2 GB of memory, 2 CPUs / 4 GB, and 4 CPUs / 8 GB
- A single LoadTest file as the testing tool, running on 4 test machines in parallel, since we usually have many HTTP clients, not just one
- Everything built and running in Docker
- DigitalOcean as the infrastructure provider

As the results below show, a single machine with a single database can handle a lot - way more than most of us will ever need. Unless we have extreme load and performance needs, microservices serve mostly as an organizational tool, allowing many teams to work in parallel more easily. Performance doesn't justify them.

The results:
- Small machine (1 CPU, 2 GB of memory): handles a sustained load of 200 - 300 RPS. For 15 seconds it handled 1000 RPS with min 0.001s, max 0.2s, mean 0.013s, p90 0.026s, p95 0.034s, p99 0.099s.
- Medium machine (2 CPUs, 4 GB of memory): handles a sustained load of 500 - 1000 RPS. For 15 seconds it handled 1000 RPS with min 0.001s, max 0.135s, mean 0.004s, p90 0.007s, p95 0.01s, p99 0.023s.
- Large machine (4 CPUs, 8 GB of memory): handles a sustained load of 2000 - 3000 RPS. For 15 seconds it handled 4000 RPS with min under 1ms, max 1.05s, mean 0.058s, p90 0.124s, p95 0.353s, p99 0.746s.
- Huge machine (8 CPUs, 16 GB of memory, not tested): most likely can handle a sustained load of 4000 - 6000 RPS.

submitted by /u/BinaryIgor (https://www.reddit.com/user/BinaryIgor)
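The post's LoadTest tooling isn't shown here, but to give a flavour of how numbers like these are gathered, here is a minimal asyncio load generator; the endpoint, the concurrency level, and the aiohttp dependency are placeholders I chose, not details from the article:

```python
# Minimal concurrent load generator (illustrative; the article uses its own LoadTest tooling).
import asyncio
import time

import aiohttp  # assumed dependency: pip install aiohttp

TARGET = "http://localhost:8080/accounts"  # placeholder endpoint
CONCURRENCY = 100
DURATION_S = 15

async def worker(session, deadline, latencies):
    while time.monotonic() < deadline:
        start = time.monotonic()
        async with session.get(TARGET) as resp:
            await resp.read()
        latencies.append(time.monotonic() - start)

async def main():
    latencies = []
    deadline = time.monotonic() + DURATION_S
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(worker(session, deadline, latencies) for _ in range(CONCURRENCY)))
    latencies.sort()
    n = len(latencies)
    print(f"requests: {n}, rps: {n / DURATION_S:.0f}")
    print(f"p50: {latencies[n // 2] * 1000:.1f} ms, p99: {latencies[int(n * 0.99)] * 1000:.1f} ms")

asyncio.run(main())
```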
[link] (https://binaryigor.com/how-many-http-requests-can-a-single-machine-handle.html) [comments] (https://www.reddit.com/r/programming/comments/1p5gins/how_many_http_requestssecond_can_a_single_machine/)
A bug fixing journey when writing a C++ Code Search Engine: std::string is not that simple
https://www.reddit.com/r/programming/comments/1p5h0c4/a_bug_fixing_journey_when_writing_a_c_code_search/
Hi everyone, I built a code search engine called Coogle (inspired by Haskell's Hoogle) to help navigate our massive legacy C/C++ codebase. While building the parser, I ran into a confusing bug where I couldn't find functions returning std::string. It turned out std::string doesn't really exist in the AST - it's a typedef for a template monster. I wrote a blog post about:
- Why C's char type is tricky (it's a byte, not a character)
- How std::string works under the hood
- How std::string_view is similar to the Linux kernel's qstr

Link: Back to Basics: From C char to string_view (Notes from building Coogle) (https://thecloudlet.github.io/blog/cpp/cpp-string/) If you are building dev tools or indexers, hopefully this saves you some debug time. submitted by /u/ypaskell (https://www.reddit.com/user/ypaskell)
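To make the AST point concrete, here is a hedged sketch using libclang's Python bindings (an approach I'm assuming for illustration; the post doesn't say which parser Coogle uses): a function's spelled return type can read "std::string" while its canonical type is the underlying basic_string template, so a naive string match on return types misses it.

```python
# Spelled vs canonical return types with libclang's Python bindings
# (pip install clang, plus a libclang shared library on the system).
# Not Coogle's code - just a demo of why matching on "std::string" fails.
from clang.cindex import CursorKind, Index

SOURCE = "demo.cpp"  # assume it contains:  #include <string>
                     #                      std::string greet();

index = Index.create()
tu = index.parse(SOURCE, args=["-std=c++17"])

for cursor in tu.cursor.walk_preorder():
    if cursor.kind == CursorKind.FUNCTION_DECL:
        spelled = cursor.result_type.spelling
        canonical = cursor.result_type.get_canonical().spelling
        # spelled is "std::string"; canonical expands the typedef into
        # std::basic_string<char, std::char_traits<char>, std::allocator<char>>
        print(f"{cursor.spelling}: spelled={spelled!r} canonical={canonical!r}")
```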
[link] (https://thecloudlet.github.io/blog/cpp/cpp-string/) [comments] (https://www.reddit.com/r/programming/comments/1p5h0c4/a_bug_fixing_journey_when_writing_a_c_code_search/)
Shaders
https://www.reddit.com/r/programming/comments/1p5i0o4/shaders/
submitted by /u/DifficultSecretary22 (https://www.reddit.com/user/DifficultSecretary22)
[link] (https://www.makingsoftware.com/chapters/shaders) [comments] (https://www.reddit.com/r/programming/comments/1p5i0o4/shaders/)
Sha1-Hulud: The Second Coming - Postman, Zapier, PostHog all compromised via npm
https://www.reddit.com/r/programming/comments/1p5i31d/sha1hulud_the_second_comming_postman_zapier/
In September, a self-propagating worm called Sha1-Hulud came into action. A new version is now spreading and it is much, much worse! Link: https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting-zapier-ensdomains

The mechanics are basically the same: it infects npm packages using stolen developer tokens. The malware uses a preinstall script to run on a victim machine, scans for secrets, steals them and publishes them on GitHub in a public repository. It then uses stolen npm tokens to infect more packages. In September it never reached critical mass... but now it looks like it has. So far, over 28,000 GitHub repositories have been created with the description "Sha1-Hulud: The Second Coming". These repos contain the stolen secrets, encoded in Base64: https://github.com/search?q=Sha1-Hulud%3A+The+Second+Coming&ref=opensearch&type=repositories

We first published about this after our discovery at 09:25 CET, but it has since got much worse. https://x.com/AikidoSecurity/status/1992872292745888025 At the start, the most significant compromise was Zapier (we still think this is the most likely first seed), but as the propagation picked up steam, we quickly saw other big names like Postman and PostHog also fall.

Technical details of the attack: the malicious packages execute code in the preinstall lifecycle script. Payload names include files like setup_bun.js and bun_environment.js. On infection, the malware:
- Registers the machine as a "self-hosted runner" named "SHA1HULUD" and injects a GitHub Actions workflow (.github/workflows/discussion.yaml) to allow arbitrary commands via GitHub discussions.
- Exfiltrates secrets via another workflow (formatter_123456789.yml) that uploads secrets as artifacts, then deletes traces (branch & workflow) to hide.
- Targets cloud credentials across AWS, Azure and GCP: reads environment variables, metadata services and credentials files; attempts privilege escalation (e.g., via Docker container breakout) and persistent access.
Impact & affected packages: we are updating our blog as we go; at the time of writing it's 425 packages covering 132 million weekly downloads in total.

Compromised Zapier packages: zapier/ai-actions, zapier/ai-actions-react, zapier/babel-preset-zapier, zapier/browserslist-config-zapier, zapier/eslint-plugin-zapier, zapier/mcp-integration, zapier/secret-scrubber, zapier/spectral-api-ruleset, zapier/stubtree, zapier/zapier-sdk, zapier-async-storage, zapier-platform-cli, zapier-platform-core, zapier-platform-legacy-scripting-runner, zapier-platform-schema, zapier-scripts

Compromised Postman packages: postman/aether-icons, postman/csv-parse, postman/final-node-keytar, postman/mcp-ui-client, postman/node-keytar, postman/pm-bin-linux-x64, postman/pm-bin-macos-arm64, postman/pm-bin-macos-x64, postman/pm-bin-windows-x64, postman/postman-collection-fork, postman/postman-mcp-cli, postman/postman-mcp-server, postman/pretty-ms, postman/secret-scanner-wasm, postman/tunnel-agent, postman/wdio-allure-reporter, postman/wdio-junit-reporter

Compromised PostHog packages: posthog/agent, posthog/ai, posthog/automatic-cohorts-plugin, posthog/bitbucket-release-tracker, posthog/cli, posthog/clickhouse, posthog/core, posthog/currency-normalization-plugin, posthog/customerio-plugin, posthog/databricks-plugin, posthog/drop-events-on-property-plugin, posthog/event-sequence-timer-plugin, posthog/filter-out-plugin, posthog/first-time-event-tracker, posthog/geoip-plugin, posthog/github-release-tracking-plugin, posthog/gitub-star-sync-plugin, posthog/heartbeat-plugin, posthog/hedgehog-mode, posthog/icons, posthog/ingestion-alert-plugin, posthog/intercom-plugin, posthog/kinesis-plugin, posthog/laudspeaker-plugin, posthog/lemon-ui, posthog/maxmind-plugin, posthog/migrator3000-plugin, posthog/netdata-event-processing, posthog/nextjs, posthog/nextjs-config, posthog/nuxt,
posthog/pagerduty-plugin, posthog/piscina, posthog/plugin-contrib, posthog/plugin-server, posthog/plugin-unduplicates, posthog/postgres-plugin, posthog/react-rrweb-player, posthog/rrdom, posthog/rrweb, posthog/rrweb-player, posthog/rrweb-record, posthog/rrweb-replay, posthog/rrweb-snapshot, posthog/rrweb-utils, posthog/sendgrid-plugin, posthog/siphash, posthog/snowflake-export-plugin, posthog/taxonomy-plugin, posthog/twilio-plugin, posthog/twitter-followers-plugin, posthog/url-normalizer-plugin, posthog/variance-plugin, posthog/web-dev-server, posthog/wizard, posthog/zendesk-plugin, posthog-docusaurus, posthog-js, posthog-node, posthog-plugin-hello-world, posthog-react-native, posthog-react-native-session-replay

What to do if you're impacted (or want to protect yourself):
- Search for and immediately remove/replace any compromised packages.
- Clear the npm cache (npm cache clean --force), delete node_modules, and reinstall clean. (This will prevent reinfection.)
- Rotate all credentials: npm tokens, GitHub PATs, SSH keys, cloud credentials.
- Enforce MFA (ideally phishing-resistant) for developer and CI/CD accounts.
- Audit GitHub & CI/CD pipelines: search for new repos with the description "Sha1-Hulud: The Second Coming", look for unauthorized workflows or commits, and monitor for unexpected npm publishes.
- Implement something like Safe-Chain to prevent malicious packages from getting installed: https://github.com/AikidoSec/safe-chain

Links:
Blog post: https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting-zapier-ensdomains
First social posts: https://www.linkedin.com/posts/advocatemack_zapier-supply-chain-compromise-alert-in-activity-7398643172815421440-egmk

submitted by /u/Advocatemack (https://www.reddit.com/user/Advocatemack)
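As a starting point for the audit steps above, here is a rough sketch that checks an npm lockfile against a handful of the compromised names listed in this post; handling affected versions, scoped package names, and transitive installs is left to proper tooling (e.g. the Safe-Chain project linked above):

```python
import json
from pathlib import Path

# A few package names taken from the lists above; the full, current list is in the advisory.
COMPROMISED = {
    "zapier-platform-core",
    "zapier-platform-cli",
    "posthog-js",
    "posthog-node",
}

def check_lockfile(path="package-lock.json"):
    """Return compromised package names found in an npm v7+ lockfile."""
    lock = json.loads(Path(path).read_text())
    found = set()
    # Lockfile v2/v3 keeps installed packages under "packages", keyed by node_modules path.
    for pkg_path in lock.get("packages", {}):
        name = pkg_path.split("node_modules/")[-1]
        if name in COMPROMISED:
            found.add(name)
    return sorted(found)

for name in check_lockfile():
    print("compromised dependency:", name)
```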
[link] (https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting-zapier-ensdomains) [comments] (https://www.reddit.com/r/programming/comments/1p5i31d/sha1hulud_the_second_comming_postman_zapier/)
Assert in production
https://www.reddit.com/r/programming/comments/1p5jdqe/assert_in_production/
Why your code should crash more. submitted by /u/dtornow (https://www.reddit.com/user/dtornow)
[link] (https://dtornow.substack.com/p/assert-in-production) [comments] (https://www.reddit.com/r/programming/comments/1p5jdqe/assert_in_production/)