Reddit Programming
I will send you the newest posts from the subreddit /r/programming.
New open-source multi-agent framework in Go — interesting alternative to Python-based AI tooling
https://www.reddit.com/r/programming/comments/1rpp76t/new_opensource_multiagent_framework_in_go/

Came across this new open-source project in Go, implementing a multi-agent orchestration framework (similar in spirit to some Python AI agent libraries, but Go-native). Notable aspects:

- agents + team coordination
- chain-of-thought style workflows
- clean, modular Go structure
- built-in examples + docs
- optional dashboard

GitHub: github.com/Ecook14/gocrewwai

Nice to see something exploring the AI tooling space outside the usual Python ecosystem.

submitted by /u/Top-Question2614 (https://www.reddit.com/user/Top-Question2614)
[link] (http://github.com/Ecook14/gocrewwai) [comments] (https://www.reddit.com/r/programming/comments/1rpp76t/new_opensource_multiagent_framework_in_go/)
here's how I used Alchemy to improve AI Auditing.
https://www.reddit.com/r/programming/comments/1rppnfa/heres_how_i_used_alchemy_to_improve_ai_auditing/

https://github.com/Nefza99/Rebis-AI-auditing-Architecture

Instead of:

input → model → output → moderation

the system introduces a multi-stage transformation pipeline:

input → staged reasoning checkpoints → validated decision → output

Each stage can:

- evaluate reasoning quality
- detect bias signals
- enforce policy thresholds
- log decision context
- halt unsafe transformations

This makes the system behave more like a decision refinery than a simple generation pipeline. The goal isn't to replace existing guardrails, but to provide a structured governance layer inside the decision process itself.

Nefza99/Rebis-AI-auditing-Architecture: An alchemical state machine for AI governance. Guides AI decisions through 18 staged checkpoints with circuit breakers, audit trails, and role-based access control. Blocked candidates get auto-remediation suggestions. Features: full audit logging, RBAC, configurable thresholds, cycle limits, and notifications. Perfect for internal Google AI ethics audits.

Why use alchemy as a framework?

This project isn't claiming medieval alchemy contained hidden AI knowledge. What interested me is that alchemists developed highly structured transformation sequences long before modern systems engineering existed. What surprised me while building this is that medieval alchemy actually describes a very functional decision architecture. Alchemy wasn't just about turning lead into gold; it was about staged transformation processes with strict checkpoints. Each stage had to pass verification before the next transformation could occur. That maps almost perfectly onto modern AI governance problems. Most AI systems today produce outputs without a structured transformation pipeline for evaluating reasoning quality, bias risk, or policy violations.
The Rebis model treats decision-making like an alchemical cycle:

- Separation → isolate signals from noise
- Purification → remove bias and faulty reasoning
- Conjunction → integrate multiple perspectives
- Fixation → produce stable conclusions

Instead of a single-pass decision, the architecture forces reasoning through 18 structured states, each acting like a circuit breaker if something fails validation. What emerges is something closer to a philosophical operating system for AI auditing. Not mystical, just a surprisingly elegant control system hidden in historical process theory.

Simplified Architecture (Rebis AI Audit Cycle)

Think of the system as a circular decision refinery rather than a single-pass output.

INPUT
↓
[1] Separation: isolate signal from noise
[2] Calcination: stress-test assumptions
[3] Dissolution: break reasoning into components
[4] Purification: remove bias or faulty logic
[5] Conjunction: integrate multiple perspectives
[6] Distillation: refine candidate solutions
[7] Fixation: lock in a stable decision
↓
AUDIT + LOGGING
↓
OUTPUT

Each stage has validation gates. If a stage fails:

FAIL → circuit breaker → remediation suggestions → retry

This creates a state-machine-style governance loop where decisions cannot progress unless they pass verification checks. The result is:

- traceable reasoning
- auditable decisions
- bias-reduction checkpoints
- controlled decision escalation

Instead of "AI thinks → AI answers", the system enforces structured transformation of reasoning before output.

How this differs from existing AI guardrail systems

Most AI safety or governance tools today fall into a few categories:

- Prompt filters: systems that block certain inputs before the model runs.
- Output moderation: classifiers that check responses after generation.
- Policy layers / rule engines: frameworks that enforce predefined safety rules.

These approaches work, but they usually operate before or after the reasoning process.
The Rebis architecture focuses on something slightly different: governing the transformation of reasoning itself.
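The staged-gate loop described above (validate each stage, trip a circuit breaker on failure, apply a remediation suggestion, retry) can be sketched as a small state machine. This is a minimal illustration, not the repo's actual API: the stage names come from the post, while `run_cycle`, `remediate`, and the retry limit are hypothetical.

```python
class CircuitBreakerError(Exception):
    """Raised when a candidate cannot pass a gate even after remediation."""

def run_cycle(candidate, stages, remediate=None, max_retries=2):
    # stages: list of (name, validator) pairs; a validator returns True
    # when the candidate passes that stage's gate.
    for name, validate in stages:
        attempts = 0
        while not validate(candidate):
            if remediate is None or attempts >= max_retries:
                raise CircuitBreakerError(f"halted at stage: {name}")
            # Apply an auto-remediation suggestion and retry this gate.
            candidate = remediate(name, candidate)
            attempts += 1
    return candidate  # passed every gate; safe to emit

# Toy usage: two of the post's stages, with trivially checkable gates.
stages = [
    ("Separation",   lambda c: not c.get("noise")),
    ("Purification", lambda c: not c.get("bias")),
]

def remediate(stage, c):
    # Hypothetical fix-up: clear whatever flag the failed stage checks.
    cleaned = dict(c)
    cleaned["noise" if stage == "Separation" else "bias"] = False
    return cleaned

result = run_cycle({"noise": True, "bias": True, "answer": 42}, stages, remediate)
print(result["answer"])  # 42: the decision survived both gates
```

A real implementation would add the audit logging, cycle limits, and RBAC the post mentions; the point here is only the gate-then-remediate-then-retry control flow.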
Curious what people working on AI governance or alignment think about staged decision pipelines like this.

submitted by /u/Smooth-Horror1527 (https://www.reddit.com/user/Smooth-Horror1527)
[link] (https://share.google/5woTTuZ7OCShuVL78) [comments] (https://www.reddit.com/r/programming/comments/1rppnfa/heres_how_i_used_alchemy_to_improve_ai_auditing/)
What’s in high demand for freelancers and easiest for beginners to start?
https://www.reddit.com/r/programming/comments/1rppoqy/whats_in_high_demand_for_freelancers_and_easiest/

A friend suggested that web frontend, backend, maybe full-stack, or app development (Android/iOS) are the easiest to learn as a beginner and are also in demand. Is this true? How should I decide which one to choose, and where can I learn it?

submitted by /u/Hot-Advisor-3353 (https://www.reddit.com/user/Hot-Advisor-3353)
[link] (http://question.com/) [comments] (https://www.reddit.com/r/programming/comments/1rppoqy/whats_in_high_demand_for_freelancers_and_easiest/)
What it costs to run 1M image search in production
https://www.reddit.com/r/programming/comments/1rpx54f/what_it_costs_to_run_1m_image_search_in_production/

I priced out every piece of infrastructure for running CLIP-based image search on 1M images in production.

- GPU inference is 80% of the bill. A g6.xlarge running OpenCLIP ViT-H/14 costs $588/month and handles 50-100 img/s. CPU inference gets you 0.2 img/s, which is not viable.
- Vector storage is cheap. 1M vectors at 1024 dims is 4.1 GB. Pinecone $50-80/month, Qdrant $65-102, pgvector on RDS $260-270. Even the expensive option is small compared to the GPU.
- S3 + CloudFront: under $25/month for 500 GB of images.
- Backend: a couple of t3.small instances behind an ALB with auto scaling, $57-120/month.

Totals:

- Moderate traffic (~100K searches/day): $740/month
- Enterprise (~500K+ searches/day): $1,845/month

submitted by /u/K3NCHO (https://www.reddit.com/user/K3NCHO)
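The 4.1 GB vector storage figure checks out arithmetically, assuming 4-byte float32 components (the post doesn't state the dtype, so that is an assumption):

```python
# Sanity-check the vector storage claim: 1M embeddings at 1024 dims.
vectors = 1_000_000
dims = 1024
bytes_per_dim = 4  # assumed float32; not stated in the post
total_bytes = vectors * dims * bytes_per_dim
print(f"{total_bytes / 1e9:.1f} GB")  # prints "4.1 GB", matching the post
```

Index overhead (e.g. HNSW graph links) would push actual disk usage somewhat above this raw figure.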
[link] (https://vecstore.app/blog/what-it-costs-to-search-1m-images) [comments] (https://www.reddit.com/r/programming/comments/1rpx54f/what_it_costs_to_run_1m_image_search_in_production/)
simple-git npm package has a CVSS 9.8 RCE. 5M+ weekly downloads. check your lockfiles.
https://www.reddit.com/r/programming/comments/1rqldot/simplegit_npm_package_has_a_cvss_98_rce_5m_weekly/

CVE-2026-28292: remote code execution through a case-sensitivity bypass. Found the writeup at https://www.codeant.ai/security-research/security-research-simple-git-remote-code-execution-cve-2026-28292

simple-git is everywhere: CI/CD pipelines, deploy scripts, automation tools. The kind of dependency you forget you have until something like this drops.

submitted by /u/Amor_Advantage_3 (https://www.reddit.com/user/Amor_Advantage_3)
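One way to act on the "check your lockfiles" advice: scan an npm v2/v3 `package-lock.json` for every installed copy of simple-git, including transitive ones. `find_dependency` is a hypothetical helper, and the patched version isn't stated in the post, so this only reports what is installed; compare the versions you find against the advisory. (v1 lockfiles nest the tree under `dependencies` instead and aren't handled here.)

```python
import json
import os

def find_dependency(lockfile_path, name):
    """List every installed copy of `name` in an npm v2/v3 package-lock.json.

    Returns (path, version) tuples so transitive copies show up too.
    """
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    # npm 7+ lockfiles record every installed path under the "packages" map.
    for path, meta in lock.get("packages", {}).items():
        if path.endswith(f"node_modules/{name}"):
            hits.append((path, meta.get("version")))
    return hits

if os.path.exists("package-lock.json"):
    for path, version in find_dependency("package-lock.json", "simple-git"):
        print(f"{path} -> {version}")
```

`npm ls simple-git` gives a similar view per project; the script form is handy when sweeping many repos in CI.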
[link] (https://www.codeant.ai/security-research/security-research-simple-git-remote-code-execution-cve-2026-28292) [comments] (https://www.reddit.com/r/programming/comments/1rqldot/simplegit_npm_package_has_a_cvss_98_rce_5m_weekly/)