Reddit Programming
I will send you the newest posts from subreddit /r/programming
Structuring multi-agent AI systems efficiently
https://www.reddit.com/r/programming/comments/1oiypfv/structuring_multiagent_ai_systems_efficiently/

I’m experimenting with AI agents that must work across multiple messaging apps while remembering context. Using Photon, I could prototype quickly with less boilerplate. How do you usually structure multi-agent AI systems to make them modular, maintainable, and memory-aware? Any recommended patterns or frameworks? submitted by /u/Fearless-Confusion-4 (https://www.reddit.com/user/Fearless-Confusion-4)
[link] (https://photon.codes/) [comments] (https://www.reddit.com/r/programming/comments/1oiypfv/structuring_multiagent_ai_systems_efficiently/)
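Not the poster's setup, but one common answer to the modularity and memory question can be sketched in plain Python: each agent is a small, single-purpose component behind a shared interface, and conversation memory lives in a store keyed by session id rather than inside any one agent. All names below are illustrative; none of this is Photon's API.

```python
# Sketch of a router + shared-memory pattern for multi-agent systems.
# Agents stay stateless; per-session context lives in one Memory store.
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Per-session conversation history shared across all agents."""
    sessions: dict = field(default_factory=dict)

    def history(self, session_id: str) -> list:
        return self.sessions.setdefault(session_id, [])

    def remember(self, session_id: str, entry: str) -> None:
        self.history(session_id).append(entry)


class Agent:
    """Base interface: subclasses decide whether they can handle a message."""
    def can_handle(self, message: str) -> bool:
        raise NotImplementedError

    def handle(self, message: str, memory: Memory, session_id: str) -> str:
        raise NotImplementedError


class EchoAgent(Agent):
    """Fallback agent: records the message and reports how many it has seen."""
    def can_handle(self, message: str) -> bool:
        return True

    def handle(self, message: str, memory: Memory, session_id: str) -> str:
        memory.remember(session_id, message)
        return f"echo #{len(memory.history(session_id))}: {message}"


class Router:
    """Dispatches each message to the first agent that claims it."""
    def __init__(self, agents: list, memory: Memory):
        self.agents, self.memory = agents, memory

    def dispatch(self, session_id: str, message: str) -> str:
        for agent in self.agents:
            if agent.can_handle(message):
                return agent.handle(message, self.memory, session_id)
        raise ValueError("no agent can handle this message")


router = Router([EchoAgent()], Memory())
print(router.dispatch("alice", "hello"))   # echo #1: hello
print(router.dispatch("alice", "again"))   # echo #2: again
```

Because memory is injected rather than owned by an agent, the same pattern works across messaging apps: each app adapter only needs to supply a stable session id.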
Disasters I've seen in a microservices world, part II
https://www.reddit.com/r/programming/comments/1oj1i7i/disasters_ive_seen_in_a_microservices_world_part/

Four years ago, I wrote "Disasters I've Seen in a Microservices World". I thought by now we'd have solved most of them. We didn't; we just learned to live with the chaos. The sequel is out, with four new "disasters" I've seen first-hand:
#7 more services than engineers
#8 the gateway to hell
#9 technology sprawl
#10 when the org chart becomes your architecture
Does it sound familiar to you? submitted by /u/joaoqalves (https://www.reddit.com/user/joaoqalves)
[link] (https://world.hey.com/joaoqalves/disasters-i-ve-seen-in-a-microservices-world-part-ii-9e6826bf) [comments] (https://www.reddit.com/r/programming/comments/1oj1i7i/disasters_ive_seen_in_a_microservices_world_part/)
"The Bug Hunt" blog post pattern
https://www.reddit.com/r/programming/comments/1oj9dkh/the_bug_hunt_blog_post_pattern/

This is Chapter 8 of the book "Writing for Developers: Blogs That Get Read" (published by Manning). And here's an ever-growing collection of "Bug Hunt" blog posts: https://writethat.blog/?pattern=bug%20hunt submitted by /u/swdevtest (https://www.reddit.com/user/swdevtest)
[link] (https://writethatblog.substack.com/p/the-bug-hunt-blog-post-pattern) [comments] (https://www.reddit.com/r/programming/comments/1oj9dkh/the_bug_hunt_blog_post_pattern/)
Vi/Vim Editor: Practical commands every developer, sysadmin, and DevOps engineer should know
https://www.reddit.com/r/programming/comments/1ojbu0m/vi_vim_editor_practical_commands_every_developer/

I have put together a simple guide to the vi commands that have actually helped me over the years when editing configs or scripts on Linux. Short, practical, and focused on real examples. Let me know if I've missed anything; I'd love feedback and to make it an exhaustive list! Read it here (https://medium.com/stackademic/the-80-20-guide-to-vi-editor-20-of-commands-that-do-80-of-the-work-ff1ce320f461?sk=b61c38fdc6a69ae5ba7400390964934b) submitted by /u/sshetty03 (https://www.reddit.com/user/sshetty03)
[link] (https://medium.com/stackademic/the-80-20-guide-to-vi-editor-20-of-commands-that-do-80-of-the-work-ff1ce320f461?sk=b61c38fdc6a69ae5ba7400390964934b) [comments] (https://www.reddit.com/r/programming/comments/1ojbu0m/vi_vim_editor_practical_commands_every_developer/)
Educational Benchmark: 100 Million Records with Mobile Logic Compression (Python + SQLite + Zlib)
https://www.reddit.com/r/programming/comments/1ojc99d/educational_benchmark_100_million_records_with/

Introduction
This is an educational, exploratory experiment in how Python can handle large volumes of data by applying logical and semantic compression, a concept I call LSC (Logical Semantic Compression). The goal was to generate 100 million structured records and store them in compressed blocks, using only Python, SQLite, and zlib, without parallelism and without high-performance external libraries.

⚙️ Environment Configuration
Device: Android (via Termux)
Language: Python 3
Database: SQLite
Compression: zlib
Mode: single-core
Total records: 100,000,000
Batch: 1,000 records per chunk
Periodic commits: every 3 chunks

🧩 Logical Structure
Each generated record follows a simple semantic pattern:
{ "id": i, "title": f"Book {i}", "author": "random letter string", "year": number between 1950 and 2024, "category": "Romance/Science/History" }
These records are grouped into chunks and, before being stored in the database, converted to JSON and compressed with zlib. Each block represents a "logical package", a central concept in LSC.

⚙️ Main Excerpt from the Code
json_bytes = json.dumps(batch, separators=(',', ':')).encode()
comp_blob = zlib.compress(json_bytes, ZLIB_LEVEL)
cur.execute(
    "INSERT INTO chunks (start_id, end_id, blob, count) VALUES (?, ?, ?, ?)",
    (i - BATCH_SIZE + 1, i, sqlite3.Binary(comp_blob), len(batch))
)
The code performs: semantic generation of records, JSON serialization, logical compression (zlib), and writing to SQLite.

🚀 Benchmark Results
📊 Records generated: 100,000,000
🧩 Chunks processed: 100,000
📦 Compressed size: ~2 GB
📤 Uncompressed size: ~10 GB
⚙️ Compression ratio: ~20%
⏱️ Total time: ~50 seconds (approx.)
Average speed: ~200,000 records/s
🔸 Single-core mode (CPU-bound)

🔬 Observations
Even though it ran on a smartphone, the result was surprisingly stable. The compression ratio stayed close to 20%, with minimal variation between blocks.

This demonstrates that, with a good logical data structure, considerable efficiency is possible without resorting to parallelism or optimizations in C/C++.

🧠 About LSC
LSC (Logical Semantic Compression) is not a library but an idea: compress data based on its logical structure and semantic repetition, not just its raw bytes. Each block then carries not only information but also the relationships and coherence between records. Compression becomes a reflection of the meaning of the data, not just its size.

🎓 Conclusion
Even running in single-core mode with a simple setup, Python showed it can handle 100 million structured records while maintaining consistent compression and low fragmentation.

🔍 This experiment reinforces the idea that the logical organization of data can be as powerful as technical optimization. submitted by /u/MajorPistola (https://www.reddit.com/user/MajorPistola)
[link] (https://www.reddit.com/r/datascience/comments/1oj8ufa/educational_benchmark_100_million_records_with/?share_id=9ZYXguvpWPZkl3y91a5Xg&utm_content=1&utm_medium=android_app&utm_name=androidcss&utm_source=share&utm_term=1) [comments] (https://www.reddit.com/r/programming/comments/1ojc99d/educational_benchmark_100_million_records_with/)
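The chunk-compress-insert loop the post describes can be reproduced end to end in a few lines. This is a minimal sketch, not the author's full script: the constant names (`BATCH_SIZE`, `ZLIB_LEVEL`) and table layout follow the post's excerpt, while the record generator is a simplified stand-in for the post's semantic pattern, and the total is kept small so it runs in seconds.

```python
# Sketch: generate records in batches, serialize to compact JSON,
# zlib-compress each batch, and store one compressed blob per chunk.
import json
import random
import sqlite3
import string
import zlib

BATCH_SIZE = 1_000
ZLIB_LEVEL = 6
TOTAL = 10_000  # the post used 100_000_000; reduced here for a quick run

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE chunks (start_id INTEGER, end_id INTEGER, blob BLOB, count INTEGER)"
)

batch = []
for i in range(1, TOTAL + 1):
    batch.append({
        "id": i,
        "title": f"Book {i}",
        "author": "".join(random.choices(string.ascii_lowercase, k=8)),
        "year": random.randint(1950, 2024),
        "category": random.choice(["Romance", "Science", "History"]),
    })
    if i % BATCH_SIZE == 0:
        # Compact JSON (no whitespace) compresses better than the default.
        json_bytes = json.dumps(batch, separators=(",", ":")).encode()
        comp_blob = zlib.compress(json_bytes, ZLIB_LEVEL)
        cur.execute(
            "INSERT INTO chunks (start_id, end_id, blob, count) VALUES (?, ?, ?, ?)",
            (i - BATCH_SIZE + 1, i, sqlite3.Binary(comp_blob), len(batch)),
        )
        batch = []
conn.commit()

# Reading back: decompress one chunk and recover its records.
row = cur.execute("SELECT blob, count FROM chunks LIMIT 1").fetchone()
records = json.loads(zlib.decompress(row[0]))
assert len(records) == row[1]
```

The compression win comes from the repetition across records in a chunk (shared keys, similar values), which is exactly the "logical package" idea: zlib sees the redundancy only because structurally similar records are grouped before compression.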
Surf update: new TLS fingerprints for Firefox 144
https://www.reddit.com/r/programming/comments/1ojd7rf/surf_update_new_tls_fingerprints_for_firefox_144/

An update to Surf (https://github.com/enetx/surf), the browser-impersonating HTTP client for Go. The latest version adds support for new TLS fingerprints matching the behavior of Firefox 144 and Firefox 144 in Private Mode. These fingerprints include accurate ordering of TLS extensions, signature algorithms, supported groups, and cipher suites, and use the correct GREASE and key-share behavior. JA3 and JA4 hashes match the real browsers, including JA4-R and JA4-O, and HTTP/2 Akamai fingerprinting is also consistent. Both standard and private modes are supported with full fidelity, including FakeRecordSizeLimit, CompressCertificate with zlib, brotli, and zstd, and X25519 with MLKEM768 hybrid key exchange. The update also improves compatibility with TLS session resumption, hybrid key reuse, and Encrypted Client Hello for Tor-like traffic. Let me know if you find any mismatches or issues with the new fingerprints. submitted by /u/Affectionate_Type486 (https://www.reddit.com/user/Affectionate_Type486)
[link] (https://github.com/enetx/surf) [comments] (https://www.reddit.com/r/programming/comments/1ojd7rf/surf_update_new_tls_fingerprints_for_firefox_144/)
The average codebase is now 50% dependencies — is this sustainable?
https://www.reddit.com/r/programming/comments/1ojdrv9/the_average_codebase_is_now_50_dependencies_is/

I saw an internal report showing that most projects spend more effort patching dependencies than writing application logic. Is "build less, depend more" reaching a breaking point? submitted by /u/Legitimate_Sun1783 (https://www.reddit.com/user/Legitimate_Sun1783)
[link] (https://www.intel.com/content/www/us/en/developer/articles/guide/the-careful-consumption-of-open-source-software.html?utm_source=chatgpt.com) [comments] (https://www.reddit.com/r/programming/comments/1ojdrv9/the_average_codebase_is_now_50_dependencies_is/)
Is most Agile/Scrum a pale imitation, a hollow shell?
https://www.reddit.com/r/programming/comments/1ojp9sa/is_most_agilescrum_a_pale_imitation_a_hollow_shell/

After working with Agile/Scrum for years, I have unfortunately seldom seen an authentic realization of it. This doesn't mean I am not a devout believer; it's that Agile/Scrum is far more than a paradigm shift at the project level. At its core it is a culture change, even a personality change. It's easy to imitate Scrum in outward form; it's hard to walk the walk.

The core philosophy of Agile is that the Team is the key to adaptation, flexibility, and excellence. That's why we add the extra role of Scrum Master to shape the dev team to act this way, why we hold the Team rather than an individual leader (PO or SM) responsible for delivery, and why we employ servant leadership instead of authoritarian leadership. Setting aside all other considerations, this responsibility change alone makes Agile a mission impossible for most organizations, because in Scrum the responsibility for on-time, on-budget delivery rests with the whole team, not the PO or SM. That gives rise to a series of issues.

At the grassroots level, a productive daily standup requires every developer to be a proactive, communicative, collaborative, business-aware team player. Seriously? You are basically changing the persona of the tech team (remember, there is a thing called a nerd for a reason); do you expect them to become salespeople or consultants overnight? You are changing personalities, not processes or methods.

At the senior management level, you tell management that the whole team, not an individual, is responsible for a failed project? Then who should they fire if needed? In reality, therefore, the SM becomes either a scribe or an assistant PM, the PO is still in charge of the project, and the dev team just feels it's another show for functionally illiterate management. submitted by /u/DK_ZJJ0801 (https://www.reddit.com/user/DK_ZJJ0801)
[link] (https://www.reddit.com/) [comments] (https://www.reddit.com/r/programming/comments/1ojp9sa/is_most_agilescrum_a_pale_imitation_a_hollow_shell/)
Help needed: multilayer cipher (Caesar → Atbash → Keyword("keyword") → Rail Fence(k=2)) — brute force failed
https://www.reddit.com/r/programming/comments/1ojr7c2/help_needed_multilayer_cipher_caesar_atbash/

Hi everyone, I've been stuck on this multilayer cipher and could use a fresh pair of eyes.

Ciphertext (final encrypted output):
fm f zfprz md yrqvdjny ypz v wnnu dq uvx wvw yfz vdpuanf djpohy fv mavqlznu m qaf bdnvdu wnwafjh dvnm w udw dqd udcfdnf fnu ypz w udovydfaud y fonqcbx f rddhnmwpud zm d y yndrdqqvwafjh dnfsynarddb ywvwxdrudnf ukhfpn fdjd drvzhmaqnu wnw f f njpnfpad ddfyvcnfofmzfbtfb rjf f zfpcd qdndxq dzfvan xqb wntq rwxdrndxq dmv v tvwpzmwd dfd py lq qn tkvfuapnwxfznz qd

What I was told about the encryption chain (in order of application, plaintext → ciphertext):
1. Caesar (unknown shift)
2. Atbash
3. Keyword cipher using keyword "keyword"
4. Rail Fence with key = 2, offset = 0
So to decrypt you would reverse those steps: RailFence⁻¹ → Keyword⁻¹ → Atbash⁻¹ → Caesar⁻¹.

What I've already tried (exhaustively/algorithmically):
- Rail Fence decrypt with key=2, applied both to the full text including spaces/punctuation and to letters only (removing non-letters before the rail fence and keeping letters only through subsequent steps).
- Rail starting offsets: start rail = 0 and start rail = 1.
- Keyword cipher variants: keyed alphabets keyword + remaining, remaining + keyword, keyword + reversed(remaining), reversed(keyed), and a variant that merges i/j (classic Playfair-style handling); both mapping conventions (keyed alphabet used as the ciphertext alphabet or as the plaintext alphabet).
- Atbash (standard a-z), implemented normally.
- All Caesar shifts 0-25 (brute force).
- Both orders for the last steps (Atbash→Caesar and Caesar→Atbash), in case the description differed slightly.
- Scored outputs by English heuristics (common-word matches + letter frequency) and inspected the top candidates.

What I found: none of the automated attempts produced a clean, obviously correct English plaintext. A few high-scoring candidates contained fragments of corrupted English (words or partial words), which suggests we're close but some parameter (keyword alphabet construction, punctuation handling at the Rail Fence step, or Caesar specifics) differs from what was assumed.

What I'm asking for (helpful test ideas):
- If you try it, please post your decryption attempt and include: which Rail Fence interpretation you used (letters-only vs. full text, start rail 0/1); which keyword alphabet construction (the exact keyed alphabet string) and whether you assumed the keyword mapped plaintext→cipher or cipher→plaintext; whether you merged i/j or otherwise modified the alphabet; whether you applied Atbash before or after Caesar; and the Caesar shift you tried.
- If you have a favorite hill-climbing/substitution solver (e.g., simulated annealing for monoalphabetic substitution), please try it on the state after reversing the Rail Fence and Keyword steps (or on earlier/later intermediate states). I've exhausted the simple brute-force space, and an intelligent substitution solver might find the monoalphabetic mapping.
- If this looks like it could be non-English, or a substitution variant I didn't try (such as keyed Vigenère instead of a simple keyword monoalphabetic), please say so; I'm open to that possibility.

Extra context: I can provide intermediate states (the Rail-Fence-decoded result for letters-only / full text, etc.) if that helps; say which variant you want and I'll paste it.

Thanks in advance, any idea, clue, or direction is appreciated! submitted by /u/Front-Cabinet4945 (https://www.reddit.com/user/Front-Cabinet4945)
[link] (https://www.reddit.com/r/programming/submit/?type=LINK) [comments] (https://www.reddit.com/r/programming/comments/1ojr7c2/help_needed_multilayer_cipher_caesar_atbash/)