Reddit Programming
I will send you the newest posts from subreddit /r/programming
GlobalCVE — Unified CVE Feed for Developers & Security Tools
https://www.reddit.com/r/programming/comments/1oh4ge4/globalcve_unified_cve_feed_for_developers/

For devs building or maintaining security-aware software, GlobalCVE.xyz aggregates CVE data from multiple global sources (NVD, MITRE, CNNVD, etc.) into one clean feed. It's open source (github.com/GlobalCVE), API-ready, and designed to make vulnerability tracking less fragmented. Useful if you're integrating CVE checks into CI/CD, writing scanners, or just want better visibility. submitted by /u/reallylonguserthing (https://www.reddit.com/user/reallylonguserthing)
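For a sense of what wiring such a feed into CI could look like, here is a minimal TypeScript sketch. The endpoint path and response shape below are hypothetical, invented for illustration only; check the project's docs (github.com/GlobalCVE) for the real API.

```typescript
// Minimal sketch of polling an aggregated CVE feed from a CI step.
// NOTE: the route and response shape are hypothetical examples,
// not the project's documented API.
interface CveEntry {
  id: string;       // e.g. "CVE-2024-12345"
  source: string;   // e.g. "NVD", "MITRE", "CNNVD"
  summary?: string;
}

async function fetchRecentCves(): Promise<CveEntry[]> {
  const res = await fetch("https://globalcve.xyz/api/recent"); // hypothetical route
  if (!res.ok) throw new Error(`feed request failed: ${res.status}`);
  return (await res.json()) as CveEntry[];
}

// Example: flag new entries that mention dependencies you ship.
const watched = ["openssl", "log4j"];
fetchRecentCves().then((entries) => {
  const hits = entries.filter((e) =>
    watched.some((dep) => e.summary?.toLowerCase().includes(dep))
  );
  if (hits.length > 0) {
    console.error("New CVEs touching watched deps:", hits.map((h) => h.id));
    process.exit(1); // fail the CI job
  }
});
```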
[link] (http://globalcve.xyz/) [comments] (https://www.reddit.com/r/programming/comments/1oh4ge4/globalcve_unified_cve_feed_for_developers/)
OpenAI Atlas "Agent Mode" Just Made ARIA Tags the Most Important Thing on Your Roadmap
https://www.reddit.com/r/programming/comments/1oh8lug/openai_atlas_agent_mode_just_made_aria_tags_the/

I've been analyzing the new OpenAI Atlas browser, and most people are missing the biggest takeaway for developers. I spent time digging into the technical architecture for an article I was writing, and the reality is far more complex: this isn't a browser, it's an agent platform. Article (https://medium.com/ai-advances/openai-atlas-beyond-the-browser-window-unpacking-agentic-web-ai-memories-the-future-of-7a1900fe0999?sk=f86d5cadb904bb8aae15458cfcc71e72)

The two features that matter:

"Browser Memories": an opt-in feature that builds a personal, queryable knowledge graph of what you see. You can ask it, "Find that article I read last week about Python and summarize the main point." It's persistent, long-term memory for your AI.

"Agent Mode": the part that's both amazing and terrifying. It's an AI that can autonomously navigate websites, fill out forms, and click buttons on your behalf. It's not a dumb script, and it's not relying on brittle selectors; it uses the LLM to semantically understand the DOM and the page's intent. So how do you give it unambiguous instructions? ARIA tags. A <div> you styled to look like a button? The agent might get confused. But a <button> with a proper role and accessible name? That's a direct, machine-readable instruction.

The crazy part is the security: OpenAI openly admits Agent Mode is vulnerable to "indirect prompt injection", i.e. a malicious prompt hidden on a webpage that your agent reads.

Accessibility has always been important, but I'd argue it's now mission-critical for "Agent-SEO." We're about to see a whole new discipline of optimizing sites for AI agents, and it starts with proper semantic HTML and ARIA. I wrote a deeper guide on this, including the massive security flaw (indirect prompt injection) that this all introduces. If you build for the web, this is going to affect you. link (https://medium.com/ai-advances/openai-atlas-beyond-the-browser-window-unpacking-agentic-web-ai-memories-the-future-of-7a1900fe0999?sk=f86d5cadb904bb8aae15458cfcc71e72) submitted by /u/Paper-Superb (https://www.reddit.com/user/Paper-Superb)
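To make the div-vs-button point concrete, here is a small TypeScript sketch of the kind of lookup an agent could do against accessibility semantics. It's an illustration of the idea only; Atlas's actual implementation isn't public, and the markup in the comments is an invented example.

```typescript
// Why semantic markup gives an agent an unambiguous handle (illustrative
// sketch, not Atlas's real code). Assumes a DOM environment.
//
// Div "button": no machine-readable role or accessible name:
//   <div class="btn btn-primary" onclick="submitOrder()">Place order</div>
// Semantic button: role and name are explicit in the accessibility tree:
//   <button type="submit" aria-label="Place order">Place order</button>

// An agent resolving "place the order" can query accessibility semantics
// directly instead of guessing from CSS classes:
function findActionTarget(name: string): HTMLElement | null {
  const candidates = document.querySelectorAll<HTMLElement>(
    'button, [role="button"], input[type="submit"]'
  );
  for (const el of candidates) {
    const label = el.getAttribute("aria-label") ?? el.textContent ?? "";
    if (label.trim().toLowerCase().includes(name.toLowerCase())) return el;
  }
  return null; // the styled <div> above is invisible to this query
}

findActionTarget("place order")?.click();
```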
[link] (https://medium.com/ai-advances/openai-atlas-beyond-the-browser-window-unpacking-agentic-web-ai-memories-the-future-of-7a1900fe0999?sk=f86d5cadb904bb8aae15458cfcc71e72) [comments] (https://www.reddit.com/r/programming/comments/1oh8lug/openai_atlas_agent_mode_just_made_aria_tags_the/)
Extremely fast data compression library
https://www.reddit.com/r/programming/comments/1oha4zd/extremely_fast_data_compression_library/

I needed a compression library for fast in-memory compression, but none were fast enough, so I created my own: memlz. It is several times faster than LZ4 at both compression and decompression, but of course trades that for a worse compression ratio. submitted by /u/South_Acadia_6368 (https://www.reddit.com/user/South_Acadia_6368)
[link] (https://github.com/rrrlasse/memlz) [comments] (https://www.reddit.com/r/programming/comments/1oha4zd/extremely_fast_data_compression_library/)
How to test and replace any missing translations with i18next
https://www.reddit.com/r/programming/comments/1oi5f42/how_to_test_and_replace_any_missing_translations/

I recently found a really practical way to detect and fill missing translations when working with i18next, and honestly, it saves a ton of time when you have dozens of JSON files to maintain.

Step 1 — Test for missing translations. You can automatically check whether any keys are missing from your localization files. It works from the CLI, in CI/CD pipelines, or inside your Jest/Vitest test suite. Example:
npx intlayer test:i18next
It scans your codebase, compares it to your JSON files, and reports which keys are missing or unused. Super handy before deploying or merging a PR. (A sketch of what this kind of check boils down to follows below.)

Step 2 — Automatically fill missing translations. Choose your AI provider (ChatGPT, Claude, DeepSeek, or Mistral) and use your own API key to auto-fill missing entries. Only the missing strings get translated; your existing ones stay untouched. Example:
npx intlayer translate:i18next --provider=chatgpt
It generates translations for the missing keys in all your locales.

Step 3 — Integrate in CI/CD. Plug it into your CI to make sure no new missing keys are introduced:
npx intlayer test:i18next --ci
If missing translations are found, it can fail the pipeline or just log warnings, depending on your config.

Bonus: detect JSON changes via Git. There's a (WIP) feature that detects which lines changed in your translation JSON using git diff, so it only re-translates what was modified.

If you're using Next.js, here's a guide that explains how to set it up with next-i18next (based on i18next under the hood): 👉 https://intlayer.org/fr/blog/intlayer-with-next-i18next

TL;DR: test missing translations automatically; auto-fill missing JSON entries using AI; integrate with CI/CD; works with i18next. submitted by /u/AdmirableJackfruit59 (https://www.reddit.com/user/AdmirableJackfruit59)
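For intuition about what the Step 1 check amounts to, here is a tiny TypeScript sketch that diffs the key sets of two i18next locale files. The file paths are made up for the example, and this shows the general idea only, not intlayer's actual implementation.

```typescript
// Sketch: a "missing translations" check is essentially a key-set diff
// between a reference locale and a target locale. Illustrative only;
// the paths below are hypothetical.
import { readFileSync } from "node:fs";

type Messages = Record<string, unknown>;

// Flatten nested i18next JSON ({"a": {"b": "..."}} -> "a.b").
function flattenKeys(obj: Messages, prefix = ""): string[] {
  return Object.entries(obj).flatMap(([k, v]) => {
    const key = prefix ? `${prefix}.${k}` : k;
    return typeof v === "object" && v !== null
      ? flattenKeys(v as Messages, key)
      : [key];
  });
}

const en = flattenKeys(JSON.parse(readFileSync("locales/en/common.json", "utf8")));
const fr = new Set(flattenKeys(JSON.parse(readFileSync("locales/fr/common.json", "utf8"))));

const missing = en.filter((k) => !fr.has(k));
if (missing.length > 0) {
  console.error("Missing in fr:", missing);
  process.exit(1); // fail CI when keys are missing
}
```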
[link] (https://intlayer.org/fr/blog/intlayer-with-next-i18next) [comments] (https://www.reddit.com/r/programming/comments/1oi5f42/how_to_test_and_replace_any_missing_translations/)
Faster Database Queries: Practical Techniques
https://www.reddit.com/r/programming/comments/1oi99yw/faster_database_queries_practical_techniques/

Published a new write-up on Medium. If you work on highly available & scalable systems, you might find it useful. submitted by /u/Trust_Me_Bro_4sure (https://www.reddit.com/user/Trust_Me_Bro_4sure)
[link] (https://kapillamba4.medium.com/faster-database-queries-practical-techniques-074ba9afdaa3) [comments] (https://www.reddit.com/r/programming/comments/1oi99yw/faster_database_queries_practical_techniques/)
Lessons from scaling live events at Patreon: modeling traffic, tuning performance, and coordinating teams
https://www.reddit.com/r/programming/comments/1oicq3e/lessons_from_scaling_live_events_at_patreon/

At Patreon, we recently scaled our platform to handle tens of thousands of fans joining live events at once. By modeling real user arrivals, tuning performance, and aligning across teams, we cut web load times by 57% and halved iOS startup requests. Here’s how we did it and what we learned about scaling real-time systems under bursty load:
https://www.patreon.com/posts/from-thundering-141679975 What are some surprising lessons you’ve learned from scaling a platform you've worked on? submitted by /u/patreon-eng (https://www.reddit.com/user/patreon-eng)
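As a rough illustration of what "modeling real user arrivals" can mean for a load test (a sketch of the general idea, not Patreon's actual tooling): instead of driving a load generator at a flat requests-per-second rate, sample a burst curve that peaks at the event start.

```typescript
// Illustrative sketch: a Gaussian-shaped arrival curve for a live event,
// where most fans join in the minutes around the start time. Not
// Patreon's tooling; numbers are invented for the example.
function arrivalsPerSecond(t: number, start: number, total: number): number {
  const sigma = 60; // spread of the burst, in seconds
  const peak = total / (sigma * Math.sqrt(2 * Math.PI));
  return peak * Math.exp(-((t - start) ** 2) / (2 * sigma ** 2));
}

// Drive a load generator with the curve instead of a constant rate:
// 20,000 total joins, event starting at t = 300 s.
for (let t = 0; t < 600; t += 30) {
  console.log(`t=${t}s -> ~${arrivalsPerSecond(t, 300, 20000).toFixed(0)} joins/s`);
}
```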
[link] (https://www.patreon.com/posts/from-thundering-141679975) [comments] (https://www.reddit.com/r/programming/comments/1oicq3e/lessons_from_scaling_live_events_at_patreon/)