AMD Publishes Open-Source Driver for GPU Virtualization, Radeon "In the Roadmap" (Score: 151+ in 9 hours)
Link: https://readhacker.news/s/6tisT
Comments: https://readhacker.news/c/6tisT
Phoronix
AMD Publishes Open-Source GIM Driver For GPU Virtualization, Radeon "In The Roadmap"
AMD has published its 'GPU-IOV Module' (GIM) as open source, used for virtualization with Instinct accelerators.
Instant SQL for results as you type in DuckDB UI (🔥 Score: 150+ in 3 hours)
Link: https://readhacker.news/s/6tjeG
Comments: https://readhacker.news/c/6tjeG
MotherDuck
Instant SQL is here: Speedrun ad-hoc queries as you type - MotherDuck Blog
Type, see, tweak, repeat! Instant SQL is now in Preview in MotherDuck and the DuckDB Local UI. Bend reality with SQL superpowers to get real-time query results as you type.
Manufactured Consensus on X.com (🔥 Score: 154+ in 2 hours)
Link: https://readhacker.news/s/6tk3v
Comments: https://readhacker.news/c/6tk3v
rook2root.co
Manufactured consensus on x.com
How algorithm-driven influence quietly replaces genuine discourse with engineered popularity—no fake users, no overt propaganda.
Assignment 5: Cars and Key Fobs (Score: 151+ in 9 hours)
Link: https://readhacker.news/s/6tiKn
Comments: https://readhacker.news/c/6tiKn
One quantum transition makes light at 21 cm (Score: 153+ in 4 hours)
Link: https://readhacker.news/s/6tjY3
Comments: https://readhacker.news/c/6tjY3
Big Think
Why 21 cm is our Universe's "magic length"
Photons come in every wavelength you can imagine. But one particular quantum transition makes light at precisely 21 cm, and it's magical.
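That wavelength follows directly from the frequency of hydrogen's hyperfine "spin-flip" transition, roughly 1420.406 MHz (a standard physics value, not taken from the article); a quick back-of-the-envelope check:

```python
# Hydrogen's hyperfine ("spin-flip") transition emits at ~1420.406 MHz.
# Wavelength = speed of light / frequency.
c = 299_792_458.0             # speed of light in m/s (exact by SI definition)
nu = 1_420_405_751.768        # transition frequency in Hz
wavelength_cm = c / nu * 100  # convert metres to centimetres
print(f"{wavelength_cm:.3f} cm")  # prints 21.106 cm
```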
NSF director to resign amid grant terminations, job cuts, and controversy (🔥 Score: 154+ in 2 hours)
Link: https://readhacker.news/s/6tkui
Comments: https://readhacker.news/c/6tkui
Science
Exclusive: NSF director to resign amid grant terminations, job cuts, and controversy
“I have done all I can,” says Sethuraman Panchanathan, a Trump appointee who has led the agency since 2020
OpenAI releases image generation in the API (🔥 Score: 150+ in 2 hours)
Link: https://readhacker.news/s/6tkxU
Comments: https://readhacker.news/c/6tkxU
OpenAI
Introducing our latest image generation model in the API
Our latest image generation model is now available in the API via ‘gpt-image-1’—enabling developers and businesses to build professional-grade, customizable visuals directly into their own tools and platforms.
Ask HN: Share your AI prompt that stumps every model (Score: 150+ in 9 hours)
Link: https://readhacker.news/c/6tjcM
I had an idea for creating a crowdsourced database of AI prompts that no AI model could yet crack (wanted to use some of them as we're adding new models to Kilo Code).
I've seen a bunch of those prompts scattered across HN, so I thought I'd open a thread here so we can have a centralized location for them.
Share your prompt that stumps every AI model here.
Creating your own federated microblog (Score: 150+ in 13 hours)
Link: https://readhacker.news/s/6tiHK
Comments: https://readhacker.news/c/6tiHK
fedify.dev
Creating your own federated microblog | Fedify
In this tutorial, we will build a small microblog that implements the ActivityPub protocol, similar to Mastodon or Misskey, using Fedify, an ActivityPub server framework.
OpenVSX, which VSCode forks rely on for extensions, down for 24 hours (Score: 150+ in 10 hours)
Link: https://readhacker.news/s/6tk5H
Comments: https://readhacker.news/c/6tk5H
status.open-vsx.org
Eclipse Foundation status
Welcome to Eclipse Foundation status page for real-time and historical data on system performance.
DeepMind releases Lyria 2 music generation model (🔥 Score: 158+ in 3 hours)
Link: https://readhacker.news/s/6tmFX
Comments: https://readhacker.news/c/6tmFX
Google DeepMind
Music AI Sandbox, now with new features and broader access
Google has long collaborated with musicians, producers, and artists in the research and development of music AI tools. Ever since launching the Magenta project, in 2016, we’ve been exploring how...
You Can Be a Great Designer and Be Completely Unknown (Score: 150+ in 10 hours)
Link: https://readhacker.news/s/6tkUN
Comments: https://readhacker.news/c/6tkUN
Chrbutler
You Can Be a Great Designer and Be Completely Unknown - Christopher Butler
I often find myself contemplating the greatest creators in history — those rare artists, designers, and thinkers whose work transformed how we see
National Airspace System Status (Score: 150+ in 10 hours)
Link: https://readhacker.news/s/6tkVL
Comments: https://readhacker.news/c/6tkVL
nasstatus.faa.gov
National Airspace System
The Federal Aviation Administration's National Airspace System (NAS) dashboard
Microsoft subtracts C/C++ extension from VS Code forks (Score: 150+ in 10 hours)
Link: https://readhacker.news/s/6tm4P
Comments: https://readhacker.news/c/6tm4P
The Register
Devs sound alarm after Microsoft subtracts C/C++ extension from VS Code forks
Cursor, Codium makers lose access as add-on goes exclusive
Scientists Develop Artificial Leaf That Uses Sunlight to Produce Valuable Chemicals (Score: 150+ in 11 hours)
Link: https://readhacker.news/s/6tm3x
Comments: https://readhacker.news/c/6tm3x
Berkeley Lab News Center
Scientists Develop Artificial Leaf That Uses Sunlight to Produce Valuable Chemicals
Researchers built a device made of perovskite and copper that mimics a green leaf.
Show HN: Lemon Slice Live – Have a video call with a transformer model (Score: 150+ in 17 hours)
Link: https://readhacker.news/c/6tk5N
Hey HN, this is Lina, Andrew, and Sidney from Lemon Slice. We’ve trained a custom diffusion transformer (DiT) model that achieves video streaming at 25fps and wrapped it into a demo that allows anyone to turn a photo into a real-time, talking avatar. Here’s an example conversation from co-founder Andrew: https://www.youtube.com/watch?v=CeYp5xQMFZY. Try it for yourself at: https://lemonslice.com/live.
(Btw, we used to be called Infinity AI and did a Show HN under that name last year: https://news.ycombinator.com/item?id=41467704.)
Unlike existing avatar video chat platforms like HeyGen, Tolan, or Apple Memoji filters, we do not require training custom models, rigging a character ahead of time, or having a human drive the avatar. Our tech allows users to create and immediately video-call a custom character by uploading a single image. The character image can be any style - from photorealistic to cartoons, paintings, and more.
To achieve this demo, we had to do the following (among other things! but these were the hardest):
1. Training a fast DiT model. To make our video generation fast, we had to both design a model that made the right trade-offs between speed and quality, and use standard distillation approaches. We first trained a custom video diffusion transformer (DiT) from scratch that achieves excellent lip and facial expression sync to audio. To further optimize the model for speed, we applied teacher-student distillation. The distilled model achieves 25fps video generation at 256-px resolution. Purpose-built transformer ASICs will eventually allow us to stream our video model at 4k resolution.
2. Solving the infinite video problem. Most video DiT models (Sora, Runway, Kling) generate 5-second chunks. They can iteratively extend a video by another 5 seconds by feeding the end of the first chunk into the start of the second in an autoregressive manner. Unfortunately, the models experience quality degradation after multiple extensions because generation errors accumulate. We developed a temporal consistency preservation technique that maintains visual coherence across long sequences. Our technique significantly reduces artifact accumulation and allows us to generate indefinitely-long videos.
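The chunk-extension scheme described above can be sketched as a toy loop (hypothetical stand-in code, not Lemon Slice's implementation; `generate_chunk` is a placeholder for a video-DiT sampling call):

```python
def generate_chunk(context_frames, length=5):
    # Placeholder for a DiT sampling call; each "frame" is just an int here.
    start = context_frames[-1] + 1 if context_frames else 0
    return [start + i for i in range(length)]

def generate_video(num_chunks, context_len=2):
    # Autoregressive extension: each chunk is conditioned on the tail of the
    # previous one, which is also where generation errors accumulate over
    # many extensions in real diffusion models.
    frames = []
    for _ in range(num_chunks):
        context = frames[-context_len:]  # feed the end of the previous chunk
        frames.extend(generate_chunk(context))
    return frames

print(generate_video(3))  # 15 frames across 3 seamless chunks
```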
3. A complex streaming architecture with minimal latency. Enabling an end-to-end avatar video call requires several building blocks in addition to video generation: voice transcription, LLM inference, and text-to-speech. We use Deepgram as our AI voice partner, Modal as the end-to-end compute platform, and Daily.co and Pipecat to build a parallel processing pipeline that orchestrates everything via continuously streaming chunks. Our system achieves end-to-end latency of 3-6 seconds from user input to avatar response; our target is <2 seconds.
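A pipeline of this shape (transcription → LLM → speech/video, with all stages streaming chunks concurrently) can be illustrated with asyncio queues; this is a generic sketch, not the actual Deepgram/Modal/Daily.co/Pipecat stack:

```python
import asyncio

async def stage(fn, inq, outq):
    # One worker per pipeline stage: transform chunks until a None sentinel
    # arrives, then propagate the sentinel downstream and exit.
    while True:
        item = await inq.get()
        if item is None:
            await outq.put(None)
            return
        await outq.put(fn(item))

async def run_pipeline(audio_chunks):
    # Queues let all stages run concurrently, so chunk N can be transcribed
    # while chunk N-1 is already being rendered to audio/video.
    qs = [asyncio.Queue() for _ in range(4)]
    workers = [
        asyncio.create_task(stage(lambda c: f"text({c})", qs[0], qs[1])),   # ASR
        asyncio.create_task(stage(lambda t: f"reply({t})", qs[1], qs[2])),  # LLM
        asyncio.create_task(stage(lambda r: f"av({r})", qs[2], qs[3])),     # TTS + video
    ]
    for chunk in audio_chunks:
        await qs[0].put(chunk)
    await qs[0].put(None)  # signal end of input
    rendered = []
    while (item := await qs[3].get()) is not None:
        rendered.append(item)
    await asyncio.gather(*workers)
    return rendered

print(asyncio.run(run_pipeline(["hello", "world"])))
```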
More technical details here: https://lemonslice.com/live/technical-report.
Current limitations that we want to solve include: (1) enabling whole-body and background motions (we’re training a next-gen model for this), (2) reducing delays and improving resolution (purpose-built ASICs will help), (3) training a model on dyadic conversations so that avatars learn to listen naturally, and (4) allowing the character to “see you” and respond to what they see to create a more natural and engaging conversation.
We believe that generative video will usher in a new media type centered around interactivity: TV shows, movies, ads, and online courses will stop and talk to us. Our entertainment will be a mixture of passive and active experiences depending on what we’re in the mood for. Well, prediction is hard, especially about the future, but that’s how we see it anyway!
We’d love for you to try out the demo and let us know what you think! Post your characters and/or conversation recordings below.
Notation as a Tool of Thought (1979) (Score: 158+ in 9 hours)
Link: https://readhacker.news/s/6tmx3
Comments: https://readhacker.news/c/6tmx3
Avoiding Skill Atrophy in the Age of AI (Score: 150+ in 6 hours)
Link: https://readhacker.news/s/6tn8C
Comments: https://readhacker.news/c/6tn8C
Substack
Avoiding Skill Atrophy in the Age of AI
How to use AI coding assistants without letting your hard-earned engineering skills wither away.
Writing "/etc/hosts" breaks the Substack editor (🔥 Score: 158+ in 1 hour)
Link: https://readhacker.news/s/6tnMg
Comments: https://readhacker.news/c/6tnMg
Substack
When /etc/h*sts Breaks Your Substack Editor: An Adventure in Web Content Filtering
An exploration of web security mechanisms and their unexpected consequences
FBI arrests Wisconsin judge on charges of obstructing immigrant arrest (🔥 Score: 163+ in 34 minutes)
Link: https://readhacker.news/s/6tp82
Comments: https://readhacker.news/c/6tp82