Index, Count, Offset, Size
https://www.reddit.com/r/programming/comments/1rato8d/index_count_offset_size/
submitted by /u/matklad (https://www.reddit.com/user/matklad)
[link] (https://tigerbeetle.com/blog/2026-02-16-index-count-offset-size/) [comments] (https://www.reddit.com/r/programming/comments/1rato8d/index_count_offset_size/)
Do you ignore accented words in your django query
https://www.reddit.com/r/programming/comments/1rauh1u/do_you_ignore_accented_words_in_your_django_query/
Did you know that a normal search for "Helen" will usually miss names like "Hélène"? By default, icontains only matches exact characters, so accents and diacritics can make your search feel broken to users. On PostgreSQL, the unaccent lookup fixes this: Author.objects.filter(name__unaccent__icontains="Helen"). Now your search finds "Helen", "Helena", and "Hélène", making your app truly international-friendly. Don't forget to add "django.contrib.postgres" to your installed apps and enable UnaccentExtension in a Django migration or via SQL (CREATE EXTENSION "unaccent";). submitted by /u/natanasrat (https://www.reddit.com/user/natanasrat)
[link] (https://youtu.be/54KsoooS-Og) [comments] (https://www.reddit.com/r/programming/comments/1rauh1u/do_you_ignore_accented_words_in_your_django_query/)
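What the unaccent lookup does can be approximated in plain Python with Unicode NFKD decomposition. This is a rough sketch of the idea, not the actual PostgreSQL extension:

```python
import unicodedata

def unaccent(text: str) -> str:
    # NFKD splits accented characters into a base character plus
    # combining marks; dropping the combining marks leaves bare letters.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# A case-insensitive substring check after unaccenting mirrors what
# name__unaccent__icontains="Helen" matches.
assert unaccent("Hélène") == "Helene"
assert "helen" in unaccent("Hélène").lower()
```

The real extension applies a similar stripping step inside the database, so the index and the query agree on the accent-free form.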
It's impossible for Rust to have sane HKT
https://www.reddit.com/r/programming/comments/1rbai5s/its_impossible_for_rust_to_have_sane_hkt/
Rust famously has not found a good way to support higher-kinded types (HKT). This is not a lack-of-effort problem. It is caused by a fundamental flaw: Rust reifies technical propositions at the same level, and in the same slot, as business logic. When they are all first-class citizens at the type level and indistinguishable from one another, things start to break. submitted by /u/vspefs (https://www.reddit.com/user/vspefs)
[link] (https://vspefs.substack.com/p/its-impossible-for-rust-to-have-sane) [comments] (https://www.reddit.com/r/programming/comments/1rbai5s/its_impossible_for_rust_to_have_sane_hkt/)
Zero-GC and 78M samples/sec: Pushing Node.js 22 to the limit for Stateful DSP
https://www.reddit.com/r/programming/comments/1rbbvh2/zerogc_and_78m_samplessec_pushing_nodejs_22_to/
I’ve been benchmarking a hardware-aware signal-processing library for Node.js (dspx) and found that, with the right architecture, you can effectively bypass the V8 garbage collector. By implementing a zero-copy pipeline, I managed to hit 78 million samples per second on a single vCPU on AWS Lambda (1769 MB RAM). Even more interesting is the memory profile: at input sizes between 2^12 and 2^20, the system shows zero or negative heap growth, resulting in deterministic p99 latencies that stay flat even under heavy load. I also focused on microsecond-level state serialization to make stateful functions (like Kalman filters) viable on ephemeral runtimes like Lambda. The deployment size is a lean 1.3 MB, which keeps cold starts consistently between 170 ms and 240 ms. It includes a full toolkit, from MFCCs and mel spectrograms to adaptive filters and ICA/PCA transforms. It's single-threaded by default on both the C++ and JavaScript sides, so the user can multi-thread it in JavaScript using worker threads, atomics, and SharedArrayBuffers. Benchmark repository: https://github.com/A-KGeorge/dspx-benchmark Code repository: https://github.com/A-KGeorge/dspx submitted by /u/sarcasm4052 (https://www.reddit.com/user/sarcasm4052)
[link] (https://github.com/A-KGeorge/dspx-benchmark/tree/main/charts) [comments] (https://www.reddit.com/r/programming/comments/1rbbvh2/zerogc_and_78m_samplessec_pushing_nodejs_22_to/)
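The zero-allocation principle behind the post can be illustrated in a few lines of plain Python (a sketch of the idea, not the dspx API): preallocate the buffer once and process samples in place, so the steady state creates no garbage for the collector to chase.

```python
import array

# One preallocated float buffer, reused for every block of samples.
buf = array.array("f", [0.5] * 1024)

def apply_gain_in_place(samples: array.array, gain: float) -> array.array:
    # Mutate in place: no new buffer is allocated per call, which is
    # what keeps heap growth flat under sustained load.
    for i in range(len(samples)):
        samples[i] *= gain
    return samples

out = apply_gain_in_place(buf, 2.0)
assert out is buf  # same object back: zero-copy, zero allocation
```

The same discipline carries over to JavaScript with a preallocated Float32Array (or a SharedArrayBuffer when fanning out to worker threads).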
Nice try, dear AI. Now let's talk about production.
https://www.reddit.com/r/programming/comments/1rbc23l/nice_try_dear_ai_now_lets_talk_about_production/
Just recently I wanted to write a script that uploads a directory to S3, and I decided to use Copilot, which I have been using for a while. This article is an attempt to prove two things: (a) that AI still can't replace me as a senior software engineer, and (b) that it still makes sense to learn programming and focus on the fundamentals. submitted by /u/krasimirtsonev (https://www.reddit.com/user/krasimirtsonev)
[link] (https://krasimirtsonev.com/blog/article/nice-try-dear-ai-now-lets-talk-production) [comments] (https://www.reddit.com/r/programming/comments/1rbc23l/nice_try_dear_ai_now_lets_talk_about_production/)
After a year of using Cursor, Claude Code, Antigravity, and Copilot daily — I think AI tools are making a lot of devs slower, not faster. Here's why.
https://www.reddit.com/r/programming/comments/1rbdl1k/after_a_year_of_using_cursor_claude_code/
I know this is going to be controversial, but hear me out. I've been using AI coding tools heavily for the past year: Cursor Pro, Claude Code (Max), Copilot, Windsurf, and recently Antigravity. I build production apps, not toy projects. And I've come to a conclusion that I don't see discussed enough: a lot of us are slower with AI tools than without them, and we don't realize it because generating code feels fast even when shipping doesn't. Here's what I've noticed:
1. The illusion of velocity. AI spits out 200 lines in 8 seconds. You feel productive. Then you spend 40 minutes reading, debugging, and fixing hallucinations, when you could've written the 30 lines you actually needed in 10 minutes. I started tracking this: on days I used AI heavily for complex logic, I shipped fewer features than on days I used it only for boilerplate and tests.
2. Credit anxiety is real cognitive overhead. Ever catch yourself thinking "should I use Sonnet or switch to Gemini to save credits?" or "I've burned 60% of my credits and it's only the 15th"? Cursor's $20 credit pool drains 2.4x faster with Claude than with Gemini: roughly 225 Claude requests vs. 550 Gemini requests. You're now running a micro-budget alongside your codebase, and that mental load is real.
3. The sycophancy trap. You write mediocre code, ask AI to review it, and it says "Great implementation! Clean and well-structured." You move on; the bug ships to production. Remember when OpenAI had to roll back GPT-4o in April 2025 because it was literally praising users for dangerous decisions? That problem hasn't gone away. I now always add "grade this harshly" or "what would a hostile code reviewer find?", and the difference in feedback quality is night and day.
4. IDE-hopping is killing your productivity. All these IDEs use the same models. Cursor, Windsurf, Antigravity, Copilot: they all have access to Claude and GPT-5. The differences come from context-window management, agent architecture, system prompts, and integration depth. But devs spend weeks switching between them, losing their .cursorrules, their muscle memory, their workflows. You're perpetually a beginner.
5. Delegation requires clarity most of us don't have. When you code yourself, vagueness resolves naturally. When you delegate to an AI agent, vagueness compounds. The agent confidently builds the wrong thing across 15 files, and now you're debugging code you didn't write and don't fully understand. The devs who benefit most from agent mode were already good at writing specs and decomposing problems.
6. Knowledge atrophy is real. If AI writes all your error handling, DB queries, and API integrations, do you still understand them? Senior devs with deep fundamentals can review AI output critically, but I'm genuinely worried about junior and mid-level devs building on foundations they don't understand. When the AI generates a subtle race condition or an N+1 query, you need the knowledge to catch it.
7. Tool sprawl. Cursor, Windsurf, Antigravity, Copilot, TRAE, Kiro, Kilo for IDEs. Claude, GPT-5, Gemini, DeepSeek, Mistral, Kimi for models. Then image gen, OCR, automation tools, code review bots... That's not a toolkit, it's a part-time job in subscription management.
What actually works (for me): Pick ONE IDE and commit for 3+ months; stop switching. Configure your rules files (.cursorrules, CLAUDE.md, Antigravity Skills); this is the highest-leverage thing you can do. Use AI for boilerplate, tests, docs, and code explanation; write the hard parts yourself. Fight sycophancy actively: build "be harsh" instructions into your config files. Set a credit budget and stop checking the dashboard; the mental overhead costs more than the credits. Keep writing code by hand: the moment you can't code without AI is the moment it's making you slower.
TL;DR: AI coding tools are incredible, but generating code fast ≠ shipping fast. Most devs are in the "impressed by the chainsaw but haven't learned technique" phase. Depth with one tool > breadth across eight. Fight sycophancy. Write the hard parts yourself. Curious if others are experiencing similar things or if I'm just doing it wrong. What's your honest take? submitted by /u/riturajpokhriyal (https://www.reddit.com/user/riturajpokhriyal)
[link] (https://medium.com/@riturajpokhriyal/why-ai-coding-tools-are-making-you-slower-and-what-actually-works-c18f432e470b?sk=72b292bd80effdb7ddb2eb956ae6a940) [comments] (https://www.reddit.com/r/programming/comments/1rbdl1k/after_a_year_of_using_cursor_claude_code/)
Linux 7.0 Makes Preparations For Rust 1.95
https://www.reddit.com/r/programming/comments/1rbgk2f/linux_70_makes_preparations_for_rust_195/
submitted by /u/BlueGoliath (https://www.reddit.com/user/BlueGoliath)
[link] (https://archive.is/GmeOi) [comments] (https://www.reddit.com/r/programming/comments/1rbgk2f/linux_70_makes_preparations_for_rust_195/)
You are not left behind
https://www.reddit.com/r/programming/comments/1rbhfvz/you_are_not_left_behind/
Good take on the evolving maturity of new software development tools in the context of the current LLM and agent hype. The conclusion: it's often wiser to wait and let tools actually mature (if they ever do; it's not always the case) before committing to wider adoption and a considerable investment of time and energy. submitted by /u/BinaryIgor (https://www.reddit.com/user/BinaryIgor)
[link] (https://www.ufried.com/blog/not_left_behind/) [comments] (https://www.reddit.com/r/programming/comments/1rbhfvz/you_are_not_left_behind/)
Does Syntax Matter?
https://www.reddit.com/r/programming/comments/1rbhqgg/does_syntax_matter/
submitted by /u/gingerbill (https://www.reddit.com/user/gingerbill)
[link] (https://www.gingerbill.org/article/2026/02/21/does-syntax-matter/) [comments] (https://www.reddit.com/r/programming/comments/1rbhqgg/does_syntax_matter/)
OOP design pattern
https://www.reddit.com/r/programming/comments/1rbj6lk/oop_design_pattern/
I’ve decided to learn in public. Ever wondered what “Program to an interface, not an implementation” actually means? I break it down clearly in this Strategy pattern video. submitted by /u/Big-Conflict-2600 (https://www.reddit.com/user/Big-Conflict-2600)
[link] (https://youtu.be/7xzI_ReANN4?si=9iyMNtTPMa3YgqY2) [comments] (https://www.reddit.com/r/programming/comments/1rbj6lk/oop_design_pattern/)
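The slogan can be made concrete with a minimal Strategy pattern sketch: the caller depends only on an abstract interface, and concrete algorithms are swapped in without the caller changing (a generic textbook example; the names here are illustrative, not from the video).

```python
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    """The interface callers program against."""
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentOff(DiscountStrategy):
    def __init__(self, pct: float):
        self.pct = pct
    def apply(self, price: float) -> float:
        return price * (1 - self.pct / 100)

class Checkout:
    # Checkout knows only the interface, never a concrete discount:
    # "program to an interface, not an implementation".
    def __init__(self, strategy: DiscountStrategy):
        self.strategy = strategy
    def total(self, price: float) -> float:
        return self.strategy.apply(price)

assert Checkout(NoDiscount()).total(100.0) == 100.0
assert Checkout(PercentOff(20)).total(100.0) == 80.0
```

Swapping strategies at runtime (e.g. per customer tier) requires no change to Checkout, which is the payoff of the pattern.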
Sampling Strategies Beyond Head and Tail-based Sampling
https://www.reddit.com/r/programming/comments/1rbll3f/sampling_strategies_beyond_head_and_tailbased/
A blog post on sampling strategies that go beyond the conventional techniques of head- and tail-based sampling. submitted by /u/elizObserves (https://www.reddit.com/user/elizObserves)
[link] (https://newsletter.signoz.io/p/saving-money-with-sampling-strategies) [comments] (https://www.reddit.com/r/programming/comments/1rbll3f/sampling_strategies_beyond_head_and_tailbased/)
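For context, the two conventional baselines the post moves beyond can be sketched in a few lines: head-based sampling decides up front with a coin flip, while tail-based sampling waits for the finished trace and can condition on its outcome (an illustrative sketch, not code from the linked article).

```python
import random

def head_sample(rate: float) -> bool:
    # Decided at trace start: cheap, but blind to what the trace contains.
    return random.random() < rate

def tail_sample(trace: dict, rate: float) -> bool:
    # Decided after the trace completes: always keep the interesting
    # traces (errors, slow requests), downsample the healthy majority.
    if trace["error"] or trace["duration_ms"] > 1000:
        return True
    return random.random() < rate

random.seed(0)
traces = [{"error": i % 50 == 0, "duration_ms": 20} for i in range(1000)]
kept = [t for t in traces if tail_sample(t, rate=0.1)]
assert sum(t["error"] for t in kept) == 20  # every error trace survives
assert len(kept) < 250                      # healthy traces cut to ~10%
```

The trade-off is that tail-based sampling must buffer whole traces before deciding, which is where the more advanced strategies in the post come in.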
Unicode's confusables.txt and NFKC normalization disagree on 31 characters
https://www.reddit.com/r/programming/comments/1rbm18a/unicodes_confusablestxt_and_nfkc_normalization/
submitted by /u/paultendo (https://www.reddit.com/user/paultendo)
[link] (https://paultendo.github.io/posts/unicode-confusables-nfkc-conflict/) [comments] (https://www.reddit.com/r/programming/comments/1rbm18a/unicodes_confusablestxt_and_nfkc_normalization/)
A program that outputs a zip, containing a program that outputs a zip, containing a program...
https://www.reddit.com/r/programming/comments/1rbsilp/a_program_that_outputs_a_zip_containing_a_program/
[Source code on GitHub](https://github.com/donno2048/zip-quine) In a former post, I explained the tricks I discovered that allowed me to create a snake game whose every frame is code for a snake game. A big problem I faced was cross-compiling, since that would mean the output would have to support both operating systems, making it very large and hard to fit in the terminal. The trick I found was treating the original program as a generator; that way the generated programs need not be self-similar to the generator, only to themselves. Then I realised I could use the same tactic, and abuse it much further, to produce the program in the video. Thanks to this method the generator is not very complex, but almost all of the code is macros, which makes the payload (pre-preprocessing) very small, which I quite like; as a side effect, though, the ratio between the quine's payload size and the pre-preprocessed payload is absurd. Another small gain was achieved by making macros for constant strings in both string and char-array versions. That way we can easily both add them directly to the payload and use them in the code, without needing to do complex formatting later to make the code appear in the preprocessed payload. I'm very happy about this because, together with the S(x) X(x) method I described in the former post, it seems like the biggest breakthrough that could lead to a general-purpose quine. I couldn't force gcc to let me create n copies of a char formatting string, so I used very ugly trickery with `#define f4 "%c%c%c%c" #define f3 "%c%c%c" #define f10 f3 f3 f4` and used those three macros... Maybe there's a way to tell sprintf to put the next n arguments as chars that I don't know about. Another trick I thought of is getting the format to emit no null chars, so that I could do pointer searching and arithmetic without saving the size of the buffer, then formatting again correctly.
The last trick was a very calibrated use of a `run` macro, used to initiate the payload, to run the program to generate the quine, and to format the payload. It's hard to explain the details without showing the code, so if it sounds interesting I suggest you read the `run` macro and its two uses (there's one that's easy to miss, in the S() or the payload). The rest was basically reading about the ZIP file format to be able to do this at all. submitted by /u/Perfect-Highlight964 (https://www.reddit.com/user/Perfect-Highlight964)
[link] (https://youtu.be/sIdGe2xg9Qw?si=lD8_FEv4drKmbXwZ) [comments] (https://www.reddit.com/r/programming/comments/1rbsilp/a_program_that_outputs_a_zip_containing_a_program/)
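The core self-reproduction trick (source text as data that formats itself back into place) is easiest to see in a minimal quine. Here is the classic Python one, purely as an illustration of the principle the post's C macros implement at much larger scale:

```python
# The string holds the whole program with a %r placeholder; formatting
# the string with itself regenerates the exact source text, so running
# this two-line program prints its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Every quine, including a ZIP-producing one, reduces to this shape: a payload plus a small engine that re-embeds the payload into the output.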
TLS handshake step-by-step — interactive HTTPS breakdown
https://www.reddit.com/r/programming/comments/1rbtq2y/tls_handshake_stepbystep_interactive_https/
submitted by /u/nulless (https://www.reddit.com/user/nulless)
[link] (https://toolkit.whysonil.dev/how-it-works/https) [comments] (https://www.reddit.com/r/programming/comments/1rbtq2y/tls_handshake_stepbystep_interactive_https/)
Kovan: wait-free memory reclamation for Rust, TLA+ verified, no_std, with wait-free concurrent data structures built on top
https://www.reddit.com/r/programming/comments/1rbw95s/kovan_waitfree_memory_reclamation_for_rust_tla/
submitted by /u/vertexclique (https://www.reddit.com/user/vertexclique)
[link] (https://vertexclique.com/blog/kovan-from-prod-to-mr/) [comments] (https://www.reddit.com/r/programming/comments/1rbw95s/kovan_waitfree_memory_reclamation_for_rust_tla/)
How we reclaim agency in democracy with tech: Mirror Parliament
https://www.reddit.com/r/programming/comments/1rbxe95/how_we_reclaim_agency_in_democracy_with_tech/
submitted by /u/AirlineGlass5010 (https://www.reddit.com/user/AirlineGlass5010)
[link] (https://lustra.news/info/blueprint/) [comments] (https://www.reddit.com/r/programming/comments/1rbxe95/how_we_reclaim_agency_in_democracy_with_tech/)
Dictionary Compression is finally here, and it's ridiculously good
https://www.reddit.com/r/programming/comments/1rcfofi/dictionary_compression_is_finally_here_and_its/
submitted by /u/pimterry (https://www.reddit.com/user/pimterry)
[link] (https://httptoolkit.com/blog/dictionary-compression-performance-zstd-brotli/?utm_source=newsletter&utm_medium=email&utm_campaign=blog-post-dictionary-compression-is-finally-here-and-its-ridiculously-good) [comments] (https://www.reddit.com/r/programming/comments/1rcfofi/dictionary_compression_is_finally_here_and_its/)
Code isn’t what’s slowing projects down
https://www.reddit.com/r/programming/comments/1rcj41t/code_isnt_whats_slowing_projects_down/
After a bunch of years doing this, I'm starting to think we blame code way too fast when something slips. Every delay turns into a tech conversation: architecture, debt, refactor, rewrite. But most of the time the code was… fine. What actually hurt was people not being aligned: decisions made but not written down, teams assuming slightly different things, priorities shifting, ownership kind of existing but not really. Then we add more process, which mostly just adds noise. Technical debt is easy to point at; communication issues aren't. Maybe I'm wrong, I don't know. Longer writeup here if anyone cares: https://shiftmag.dev/code-isnt-slowing-your-project-down-communication-is-7889/ submitted by /u/ArghAy (https://www.reddit.com/user/ArghAy)
[link] (https://shiftmag.dev/code-isnt-slowing-your-project-down-communication-is-7889/) [comments] (https://www.reddit.com/r/programming/comments/1rcj41t/code_isnt_whats_slowing_projects_down/)
You don't need free lists
https://www.reddit.com/r/programming/comments/1rcpfgq/you_dont_need_free_lists/
submitted by /u/ketralnis (https://www.reddit.com/user/ketralnis)
[link] (https://jakubtomsu.github.io/posts/bit_pools/) [comments] (https://www.reddit.com/r/programming/comments/1rcpfgq/you_dont_need_free_lists/)
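Judging from the linked post's title and URL, the idea is to track free slots in a fixed-size pool with a bitmask instead of an intrusive free list. A rough Python sketch of that approach (my reading of the premise, not the author's code):

```python
class BitPool:
    """Fixed-size pool; bit i of free_bits is 1 iff slot i is free."""

    def __init__(self, capacity: int):
        self.slots = [None] * capacity
        self.free_bits = (1 << capacity) - 1  # all slots start free

    def alloc(self) -> int:
        if self.free_bits == 0:
            raise MemoryError("pool exhausted")
        # Isolate the lowest set bit to find the first free slot.
        idx = (self.free_bits & -self.free_bits).bit_length() - 1
        self.free_bits &= ~(1 << idx)
        return idx

    def free(self, idx: int) -> None:
        self.free_bits |= 1 << idx

pool = BitPool(8)
a = pool.alloc()
b = pool.alloc()
pool.free(a)
assert pool.alloc() == a  # a freed slot is handed out again
```

Compared to a free list, the bitmask needs no per-slot next pointer, and a CPU's find-first-set instruction makes the scan effectively constant time.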
Using Haskell's 'newtype' in C
https://www.reddit.com/r/programming/comments/1rcpgfb/using_haskells_newtype_in_c/
submitted by /u/ketralnis (https://www.reddit.com/user/ketralnis)
[link] (https://blog.nelhage.com/2010/10/using-haskells-newtype-in-c/) [comments] (https://www.reddit.com/r/programming/comments/1rcpgfb/using_haskells_newtype_in_c/)