Reddit Programming
I will send you the newest posts from the subreddit /r/programming
Creator of Claude Code: "Coding is solved"
https://www.reddit.com/r/programming/comments/1rakdst/creator_of_claude_code_coding_is_solved/

Boris Cherny is the creator of Claude Code (a CLI agent written in React. This is not a joke) and is responsible for the following repo, which has more than 5k issues: https://github.com/anthropics/claude-code/issues Since coding is solved, I wonder why they don't just use Claude Code to investigate and solve all the issues in the Claude Code repo as soon as they pop up? Heck, why are there any issues at all if coding is solved? Who or what is making all the new bugs, gremlins? submitted by /u/Gil_berth (https://www.reddit.com/user/Gil_berth)
[link] (https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens) [comments] (https://www.reddit.com/r/programming/comments/1rakdst/creator_of_claude_code_coding_is_solved/)
Don’t make the mistake of evaluating multiple counts that involve joins without using distinct=True.
https://www.reddit.com/r/programming/comments/1ramrtr/dont_make_the_mistake_of_evaluating_multiple/

Please, Django devs! Don't make the mistake of evaluating multiple counts that involve joins without using distinct=True.
If you count both the authors and stores for a book (2 authors and 3 stores) in a single query, Django reports 6 authors and 6 stores instead of 2 and 3! submitted by /u/natanasrat (https://www.reddit.com/user/natanasrat)
[link] (https://youtu.be/wNXSPSB0jdk) [comments] (https://www.reddit.com/r/programming/comments/1ramrtr/dont_make_the_mistake_of_evaluating_multiple/)
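The inflated counts come from SQL itself, not a Django bug: two joins in one query fan each book row out into 2 × 3 = 6 rows before COUNT runs. A standalone sqlite3 sketch (hypothetical book/author/store tables) shows the effect and the COUNT(DISTINCT ...) fix that Count('authors', distinct=True) emits:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE book (id INTEGER PRIMARY KEY);
CREATE TABLE author (book_id INTEGER, name TEXT);
CREATE TABLE store (book_id INTEGER, name TEXT);
INSERT INTO book VALUES (1);
INSERT INTO author VALUES (1, 'a1'), (1, 'a2');            -- 2 authors
INSERT INTO store  VALUES (1, 's1'), (1, 's2'), (1, 's3'); -- 3 stores
""")

# Without DISTINCT: the two joins multiply rows, so both counts become 6.
naive = con.execute("""
    SELECT COUNT(author.name), COUNT(store.name)
    FROM book
    JOIN author ON author.book_id = book.id
    JOIN store  ON store.book_id  = book.id
""").fetchone()
print(naive)  # (6, 6)

# With COUNT(DISTINCT ...), i.e. Count('authors', distinct=True) in Django:
fixed = con.execute("""
    SELECT COUNT(DISTINCT author.name), COUNT(DISTINCT store.name)
    FROM book
    JOIN author ON author.book_id = book.id
    JOIN store  ON store.book_id  = book.id
""").fetchone()
print(fixed)  # (2, 3)
```

In Django terms, the fix is annotating with Count("authors", distinct=True) and Count("stores", distinct=True) instead of the bare Count() calls.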
Do you ignore accented words in your django query
https://www.reddit.com/r/programming/comments/1rauh1u/do_you_ignore_accented_words_in_your_django_query/

Did you know that a normal search for "Helen" will usually miss names like "Hélène"? By default, icontains only matches exact characters, so accents and diacritics can make your search feel broken to users. On PostgreSQL, the unaccent lookup fixes this: Author.objects.filter(name__unaccent__icontains="Helen"). Now your search finds "Helen", "Helena", and "Hélène", making your app truly international-friendly. Don't forget to include "django.contrib.postgres" in INSTALLED_APPS and enable UnaccentExtension in a Django migration or via SQL (CREATE EXTENSION "unaccent";). submitted by /u/natanasrat (https://www.reddit.com/user/natanasrat)
[link] (https://youtu.be/54KsoooS-Og) [comments] (https://www.reddit.com/r/programming/comments/1rauh1u/do_you_ignore_accented_words_in_your_django_query/)
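What the unaccent extension does can be sketched in plain Python with Unicode NFKD normalization: decompose each character into a base letter plus combining marks, then drop the marks. This is an illustration of the idea, not Django's or Postgres's implementation:

```python
import unicodedata

def unaccent(s: str) -> str:
    # "é" decomposes to "e" + a combining acute accent; keep only the base.
    nfkd = unicodedata.normalize("NFKD", s)
    return "".join(c for c in nfkd if not unicodedata.combining(c))

names = ["Helen", "Helena", "Hélène", "Boris"]
query = "helen"
matches = [n for n in names if query in unaccent(n).lower()]
print(matches)  # ['Helen', 'Helena', 'Hélène']
```

Doing this in the database (via the unaccent lookup) rather than in Python keeps the filtering on indexed columns instead of pulling every row into the application.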
It's impossible for Rust to have sane HKT
https://www.reddit.com/r/programming/comments/1rbai5s/its_impossible_for_rust_to_have_sane_hkt/

Rust famously can't find a good way to support HKT (higher-kinded types). This is not a lack-of-effort problem. It's caused by a fundamental flaw: Rust reifies technical propositions at the same level, and in the same slot, as business logic. When they are all first-class citizens at the type level and are indistinguishable, things start to break. submitted by /u/vspefs (https://www.reddit.com/user/vspefs)
[link] (https://vspefs.substack.com/p/its-impossible-for-rust-to-have-sane) [comments] (https://www.reddit.com/r/programming/comments/1rbai5s/its_impossible_for_rust_to_have_sane_hkt/)
Zero-GC and 78M samples/sec: Pushing Node.js 22 to the limit for Stateful DSP
https://www.reddit.com/r/programming/comments/1rbbvh2/zerogc_and_78m_samplessec_pushing_nodejs_22_to/

I’ve been benchmarking a hardware-aware signal-processing library for Node.js (dspx) and found that with the right architecture, you can effectively bypass the V8 garbage collector. By implementing a zero-copy pipeline, I managed to hit 78 million samples per second on a single vCPU on AWS Lambda (1769 MB RAM). Even more interesting is the memory profile: at input sizes between 2^12 and 2^20, the system shows zero or negative heap growth, resulting in deterministic p99 latencies that stay flat even under heavy load. I also focused on microsecond-level state serialization to make stateful functions (like Kalman filters) viable on ephemeral runtimes like Lambda. The deployment size is a lean 1.3 MB, which keeps cold starts consistently between 170 ms and 240 ms. It includes a full toolkit, from MFCCs and Mel spectrograms to adaptive filters and ICA/PCA transforms. It's single-threaded by default on both the C++ and JavaScript side, so the user can multi-thread it in JavaScript using worker threads, atomics, and SharedArrayBuffers. Benchmark repository: https://github.com/A-KGeorge/dspx-benchmark Code repository: https://github.com/A-KGeorge/dspx submitted by /u/sarcasm4052 (https://www.reddit.com/user/sarcasm4052)
[link] (https://github.com/A-KGeorge/dspx-benchmark/tree/main/charts) [comments] (https://www.reddit.com/r/programming/comments/1rbbvh2/zerogc_and_78m_samplessec_pushing_nodejs_22_to/)
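The core trick behind "zero or negative heap growth", preallocating buffers once and writing into them in place so the allocator and garbage collector stay idle on the hot path, is language-agnostic. A minimal Python sketch of the pattern (an illustration of the idea, not dspx's API):

```python
import array

N = 1024
# Preallocated, reusable output buffer: no per-call allocation, no GC churn.
buf = array.array("d", [0.0] * N)

def process_into(samples, out):
    # Write results in place instead of building a fresh list on every call.
    for i, x in enumerate(samples):
        out[i] = x * 0.5  # stand-in for a real DSP kernel (gain stage)
    return out

samples = [float(i) for i in range(N)]
result = process_into(samples, buf)
print(result is buf)  # True: the same buffer is reused, nothing was copied
```

In a steady-state pipeline every call reuses `buf`, which is why heap usage can stay flat regardless of how many batches flow through.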
Nice try dear AI. Now let's talk about production.
https://www.reddit.com/r/programming/comments/1rbc23l/nice_try_dear_ai_now_lets_talk_about_production/

Just recently I wanted to write a script that uploads a directory to S3. I decided to use Copilot, which I have been using for a while. This article is an attempt to prove two things: (a) that AI still can't replace me as a senior software engineer, and (b) that it still makes sense to learn programming and focus on the fundamentals. submitted by /u/krasimirtsonev (https://www.reddit.com/user/krasimirtsonev)
[link] (https://krasimirtsonev.com/blog/article/nice-try-dear-ai-now-lets-talk-production) [comments] (https://www.reddit.com/r/programming/comments/1rbc23l/nice_try_dear_ai_now_lets_talk_about_production/)
After a year of using Cursor, Claude Code, Antigravity, and Copilot daily — I think AI tools are making a lot of devs slower, not faster. Here's why.
https://www.reddit.com/r/programming/comments/1rbdl1k/after_a_year_of_using_cursor_claude_code/

I know this is going to be controversial, but hear me out. I've been using AI coding tools heavily for the past year: Cursor Pro, Claude Code (Max), Copilot, Windsurf, and recently Antigravity. I build production apps, not toy projects. And I've come to a conclusion that I don't see discussed enough: a lot of us are slower with AI tools than without them, and we don't realize it because generating code feels fast even when shipping doesn't. Here's what I've noticed:
1. The illusion of velocity. AI spits out 200 lines in 8 seconds. You feel productive. Then you spend 40 minutes reading, debugging, and fixing hallucinations. You could've written the 30 lines you actually needed in 10 minutes. I started tracking this, and on days I used AI heavily for complex logic, I shipped fewer features than on days I used it only for boilerplate and tests.
2. Credit anxiety is real cognitive overhead. Ever catch yourself thinking "should I use Sonnet or switch to Gemini to save credits?" or "I've burned 60% of my credits and it's only the 15th"? Cursor's $20 credit pool drains 2.4x faster with Claude than with Gemini: roughly 225 Claude requests versus 550 Gemini requests. You're now running a micro-budget alongside your codebase, and that mental load is real.
3. The sycophancy trap. You write mediocre code, ask AI to review it, and it says "Great implementation! Clean and well-structured." You move on. The bug ships to production. Remember when OpenAI had to roll back GPT-4o in April 2025 because it was literally praising users for dangerous decisions? That problem hasn't gone away. I now always add "grade this harshly" or "what would a hostile code reviewer find?"; the difference in feedback quality is night and day.
4. IDE-hopping is killing your productivity. All these IDEs use the same models. Cursor, Windsurf, Antigravity, and Copilot all have access to Claude and GPT-5; the differences come from context window management, agent architecture, system prompts, and integration depth. But devs spend weeks switching between them, losing their .cursorrules, their muscle memory, their workflows. You're perpetually a beginner.
5. Delegation requires clarity most of us don't have. When you code yourself, vagueness resolves naturally. When you delegate to an AI agent, vagueness compounds. The agent confidently builds the wrong thing across 15 files, and now you're debugging code you didn't write and don't fully understand. The devs who benefit most from agent mode were already good at writing specs and decomposing problems.
6. Knowledge atrophy is real. If AI writes all your error handling, DB queries, and API integrations, do you still understand them? Senior devs with deep fundamentals can review AI output critically, but I'm genuinely worried about junior and mid-level devs building on foundations they don't understand. When the AI generates a subtle race condition or an N+1 query, you need the knowledge to catch it.
7. Tool sprawl. Cursor, Windsurf, Antigravity, Copilot, TRAE, Kiro, and Kilo for IDEs. Claude, GPT-5, Gemini, DeepSeek, Mistral, and Kimi for models. Then image generation, OCR, automation tools, code review bots... That's not a toolkit, it's a part-time job in subscription management.
What actually works (for me): Pick ONE IDE and commit for 3+ months. Stop switching. Configure your rules files (.cursorrules, CLAUDE.md, Antigravity Skills); this is the highest-leverage thing you can do. Use AI for boilerplate, tests, docs, and code explanation. Write the hard parts yourself. Fight sycophancy actively: build "be harsh" instructions into your config files. Set a credit budget and stop checking the dashboard; the mental overhead costs more than the credits. Keep writing code by hand. The moment you can't code without AI is the moment it's making you slower.
TL;DR: AI coding tools are incredible, but generating code fast ≠ shipping fast. Most devs are in the "impressed by the chainsaw but haven't learned technique" phase. Depth with one tool > breadth across eight. Fight sycophancy. Write the hard parts yourself.
Curious if others are experiencing similar things, or if I'm just doing it wrong. What's your honest take? submitted by /u/riturajpokhriyal (https://www.reddit.com/user/riturajpokhriyal)
[link] (https://medium.com/@riturajpokhriyal/why-ai-coding-tools-are-making-you-slower-and-what-actually-works-c18f432e470b?sk=72b292bd80effdb7ddb2eb956ae6a940) [comments] (https://www.reddit.com/r/programming/comments/1rbdl1k/after_a_year_of_using_cursor_claude_code/)
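Point 6's "subtle N+1 query" is worth making concrete, since it is exactly the kind of bug that compiles, passes tests, and only hurts at scale. A self-contained sqlite3 sketch (hypothetical author/book tables, not from the post) of the N+1 shape versus the single-join fix:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
INSERT INTO author VALUES (1, 'A'), (2, 'B');
INSERT INTO book VALUES (1, 1, 't1'), (2, 1, 't2'), (3, 2, 't3');
""")

# N+1 shape: one query for the authors, then one more query per author.
queries = 0
authors = con.execute("SELECT id, name FROM author").fetchall()
queries += 1
for author_id, _name in authors:
    con.execute(
        "SELECT title FROM book WHERE author_id = ?", (author_id,)
    ).fetchall()
    queries += 1
print(queries)  # 3 round trips for 2 authors: 1 + N, growing with the data

# Fixed shape: one join fetches everything in a single round trip.
rows = con.execute("""
    SELECT author.name, book.title
    FROM author JOIN book ON book.author_id = author.id
""").fetchall()
print(len(rows))  # 3 rows, 1 query regardless of author count
```

The output is identical either way, which is why only a reader who knows the pattern, or who counts queries, catches it in review.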