Byte by Byte
Get your byte of tech information and news here. Discuss in our Chip Chat group!
V-JEPA 2 is the latest iteration of the V-JEPA architecture which, according to Yann LeCun, will replace Transformer-based LLMs as the basis for so-called “world models” - models that understand the real world rather than mere words.
GOOGLE THAT’S NOT WHAT I MEANT
Whoa.

AMD Research has unveiled a cutting-edge method for generating procedural, fully customizable tree geometry - running entirely on GPU using work graphs and mesh nodes. Over 150 parameters control everything from structure to seasonal changes, pruning, wind response, animation, and real-time edits.

Performance:
• 3.13ms per frame (RX 7900 XTX)
• Geometry size reduced from 34.8GB to 51KB per frame
• Continuous LOD like UE5’s Nanite, targeting stable 120 FPS
• Work graph uses up to 1.5GB scratch buffer (varies by GPU)

Presentation (HPG 2025): YouTube ~7:14
Full paper: EG Digital Library

Chapeau to AMD Research and u/Bloodwyn1756 (Bastian Kuth)!
First IPC, then CIPC, then StiffGIPC, and now OGC: the University of Utah and NVIDIA have finally arrived at a method that guarantees “penetration-free simulation of codimensional objects with minimal computational overhead”. In practice, this means we can now accurately and efficiently simulate complex real-world interactions with things like clothing and fabric.

Paper video here: https://youtu.be/xxyniqSLJik
A core reason for rewriting projects that have worked reliably for 20+ years in Rust seems to be giving developers the fun of fixing brand-new bugs.

In Ubuntu 25.10, unattended upgrades are broken due to a bug in the Rust reimplementation of GNU Coreutils. The only workaround is to update manually with apt update && apt upgrade.

https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/2129660
I don't know who needs to hear this but if you have a PS5 lying around that hasn't been recently updated, you can now play pirated games and run homebrew on it pretty easily:

https://github.com/Gezine/Y2JB
One of the most complete and open breakdowns of how large language models are trained, covering scaling challenges, ablations, infrastructure design, GPU efficiency, and post-training pipelines.
Running the Linux kernel directly in your browser - without an emulator - is now a thing. Patches have been published that allow the kernel to be compiled straight into WebAssembly.

https://lore.kernel.org/lkml/618f3602-03aa-46a8-b2d4-3c9798c4cd2b@icemanor.se/

So how did the developers get around Wasm's inability to suspend tasks? They just spin up a new dedicated "CPU" (an actual Web Worker) for every single process and let the host OS do all the scheduling.

You can try out the live demo here: https://joelseverin.github.io/linux-wasm/
Turns out if you starve a transformer of weights, it starts revealing its secrets.
And those secrets? Clean, tiny, human-readable circuits.
Alan Dye is leaving Apple to join Meta.