Reddit Programming
I will send you the newest posts from the /r/programming subreddit.
Sandboxing untrusted JavaScript with QuickJS and WebAssembly (25ms cold start)
https://www.reddit.com/r/programming/comments/1rkvsdc/sandboxing_untrusted_javascript_with_quickjs_and/

Recently I needed a safe and lightweight way to run untrusted code without containers or long-lived workers. Ended up using QuickJS compiled to WASM with a minimal host bridge. Cold starts are ~25 ms in my tests. Short write-up of the approach:
https://gace.dev/blog/sandboxing-untrusted-js
submitted by /u/baderbc (https://www.reddit.com/user/baderbc)
[link] (https://gace.dev/blog/sandboxing-untrusted-js) [comments] (https://www.reddit.com/r/programming/comments/1rkvsdc/sandboxing_untrusted_javascript_with_quickjs_and/)
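The post's actual isolation comes from running QuickJS inside a WASM instance (e.g. via a library like quickjs-emscripten), which this sketch does not reproduce. Purely to illustrate the "minimal host bridge" shape, the illustration below uses `new Function` as a stand-in for the interpreter's eval call; `runGuest` and `host_log` are hypothetical names, and `new Function` is NOT a security boundary:

```javascript
// Illustration only: the guest sees exactly the host functions we pass
// in by name, and nothing else. In the real approach, `new Function`
// would be replaced by evaluating `source` inside a WASM-isolated
// QuickJS context, with the bridge functions registered on its globals.
function runGuest(source, hostApi) {
  const bridgeNames = Object.keys(hostApi);
  const bridgeValues = Object.values(hostApi);
  // The guest expression can only reference the bridged names.
  const guest = new Function(...bridgeNames, `"use strict"; return (${source});`);
  return guest(...bridgeValues);
}

// Host exposes a single capability: a logger that records each call.
const log = [];
const result = runGuest("host_log('hi') + 40", {
  host_log: (msg) => { log.push(msg); return 2; },
});
console.log(result); // 42
```

The value of the pattern is the allowlist: the host decides per-invocation which capabilities exist, so untrusted code can compute but cannot reach the filesystem, network, or host globals unless a bridge function explicitly grants it.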
The Illusion of Building
https://www.reddit.com/r/programming/comments/1rlgcs9/the_illusion_of_building/

I keep seeing posts like this going viral: "I built a mobile app with no coding experience." "I cloned Spotify in a weekend." Building an app and engineering a system are two different activities, but people keep confusing them. AI has made the first dramatically cheaper. It hasn't touched the second. I spent some time reflecting on what's actually happening here. What "building software" means, what it doesn't, and why everyone is asking the wrong question.
submitted by /u/No_Zookeepergame7552 (https://www.reddit.com/user/No_Zookeepergame7552)
[link] (https://uphack.io/blog/post/the-illusion-of-building/) [comments] (https://www.reddit.com/r/programming/comments/1rlgcs9/the_illusion_of_building/)
Takeaways from a live dashboard of 150+ feeds that doesn't melt your browser
https://www.reddit.com/r/programming/comments/1rlpjjl/takeaways_from_a_live_dashboard_of_150_feeds_that/

I've been reading through the architecture of World Monitor (https://github.com/koala73/worldmonitor), an open-source real-time intelligence dashboard that fuses 150+ RSS feeds, conflict databases, and more into a single interactive map with 40+ data layers. Here are some interesting points you can refer to if you're building anything similar.

Data sources

RSS feeds span 15 categories across 150+ entries:
- Wire services & major outlets: Reuters, AP News, BBC World, Guardian, CNN, France 24, Al Jazeera, SCMP, Nikkei Asia
- Regional: Kyiv Independent, Meduza, Haaretz, Arab News, Premium Times (Nigeria), Folha de S.Paulo, Animal Politico (Mexico), Yonhap (Korea), VnExpress (Vietnam)
- Government & institutional: White House, State Dept, Pentagon, FEMA, Federal Reserve, SEC, CDC, UN News, CISA, IAEA, WHO, UNHCR
- Defense & OSINT: Defense One, Breaking Defense, The War Zone, Janes, USNI News, Bellingcat, Oryx, Krebs on Security
- Think tanks: Foreign Affairs, Atlantic Council, CSIS, RAND, Brookings, Carnegie, RUSI, War on the Rocks, Jamestown Foundation
- Finance & energy: CNBC, MarketWatch, Financial Times, Yahoo Finance, Reuters Energy, Oil Price / LNG

Structured APIs beyond RSS:
- ACLED: battles, explosions, violence against civilians
- UCDP: georeferenced conflict events
- GDELT: global event intelligence and protest tracking
- NASA FIRMS: satellite fire detection via VIIRS
- AISStream: live vessel positions via WebSocket
- OpenSky Network: military aircraft positions and callsigns
- Cloudflare Radar: internet outage severity by country
- FRED / EIA / Finnhub: economic indicators, energy data, market prices
- abuse.ch / AlienVault OTX / AbuseIPDB: cyber threat intelligence
- HAPI/HDX: humanitarian conflict event counts

Ingestion

Instead of each browser firing ~70 outbound requests per page load, a single edge function fetches all feeds in batches of 20 with a 25-second hard deadline.
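The batch-with-hard-deadline pattern described above can be sketched roughly like this. This is illustrative only, not World Monitor's code: `fetchFeed`, the option names, and the resolve-to-null-on-timeout convention are all assumptions.

```javascript
// Fetch many feed URLs from one function invocation, `batchSize` at a
// time, returning whatever succeeded once the hard deadline passes.
async function fetchAllFeeds(urls, fetchFeed, { batchSize = 20, deadlineMs = 25_000 } = {}) {
  const deadline = Date.now() + deadlineMs;
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const remaining = deadline - Date.now();
    if (remaining <= 0) break; // hard deadline: return what we have
    const batch = urls.slice(i, i + batchSize).map((url) =>
      Promise.race([
        fetchFeed(url),
        // Timed-out feeds resolve to null rather than blocking the batch.
        new Promise((resolve) => setTimeout(() => resolve(null), remaining)),
      ]),
    );
    // Failed fetches settle as rejected; both failures and timeouts are skipped.
    const settled = await Promise.allSettled(batch);
    for (const s of settled) {
      if (s.status === "fulfilled" && s.value) results.push(s.value);
    }
  }
  return results;
}
```

The key property is that one slow or dead feed costs at most the remaining deadline budget, never the whole invocation, so the assembled digest degrades gracefully instead of failing.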
Two-layer caching (per-feed at 600s, assembled digest at 900s) means every client for the next 15 minutes gets the cached result. For 20 concurrent users, that's 1 upstream invocation instead of 1,400 individual feed fetches.

Two-pass anomaly detection

Fast pass: rolling keyword frequency against a 7-day baseline. A term "spikes" when its 2-hour count exceeds 3x the daily average across 2+ sources. Cold-start terms (no baseline) are capped at 0.8 confidence to prevent them from outranking established signals.

Heavy pass: only spiked terms go through ML entity classification (NER), running entirely in-browser via ONNX Runtime in a Web Worker. Zero server cost, but constrained by model size and cold-start latency. Falls back to regex extraction (CVEs, APT group names, world leaders) when ML is unavailable.

Welford's algorithm for temporal baselines

"Is 47 military flights over the Black Sea unusual for a Tuesday in March?" Answering this requires per-signal, per-region, per-weekday, per-month statistics. Instead of storing full history, they use Welford's online algorithm: an exact running mean and variance from just 3 numbers per key (mean, m2, sample count). Z-scores map to severity. Anomaly detection only activates after 10 samples to avoid flagging the first observation against a zero-variance baseline.

Tradeoffs / design choices:
- Hand-tuned scoring weights instead of learned parameters (no labeled dataset exists)
- Fixed z-score thresholds on non-normal distributions (pragmatic but theoretically wrong; proper treatment would use Poisson/negative binomial)
- Browser-side ML caps model complexity but eliminates GPU infrastructure costs
- Zoom gating means information loss; a priority-based layer budget would be better

submitted by /u/noninertialframe96 (https://www.reddit.com/user/noninertialframe96)
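The Welford baseline the post describes can be sketched as follows. This is a minimal illustration under my own naming: the function names, the placement of the 10-sample cutoff, and the example numbers are mine, not the project's.

```javascript
// One baseline per key (e.g. per signal/region/weekday/month), stored
// as just three numbers -- no raw history is kept.
function makeBaseline() {
  return { count: 0, mean: 0, m2: 0 };
}

// Standard Welford update: fold in one new observation, keeping an
// exact running mean and sum of squared deviations (m2).
function update(b, x) {
  b.count += 1;
  const delta = x - b.mean;
  b.mean += delta / b.count;
  b.m2 += delta * (x - b.mean);
}

// Z-score of a new value against the baseline. Refuses to score until
// 10 samples exist, mirroring the rule against flagging early
// observations on a zero-variance baseline.
function zScore(b, x) {
  if (b.count < 10) return null;
  const variance = b.m2 / (b.count - 1); // sample variance
  const std = Math.sqrt(variance);
  return std === 0 ? 0 : (x - b.mean) / std;
}

// e.g. Tuesday military-flight counts for one region:
const b = makeBaseline();
for (const flights of [30, 32, 28, 31, 29, 30, 33, 27, 31, 29]) update(b, flights);
console.log(zScore(b, 47)); // large positive z => anomalous for this key
```

The z-score would then map to a severity bucket via fixed thresholds, which (as the post notes) is pragmatic but statistically loose for count data like flights per day.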