OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)
9 by walterbell | 1 comment on Hacker News.
Starship Flight 9 launches successfully, booster explodes on impact [video]
28 by onzeinternets | 12 comments on Hacker News.
A UEFI app that sends LLDP-MED pkt at boot to negotiate PoE+ power before the OS
27 by pietrushnic | 0 comments on Hacker News.
Look Ma, No Bubbles: Designing a Low-Latency Megakernel for Llama-1B
29 by ljosifov | 3 comments on Hacker News.
Using Postgres pg_test_fsync tool for testing low latency writes
9 by mfiguiere | 0 comments on Hacker News.
Show HN: AutoThink – Boosts local LLM performance by 43% with adaptive reasoning
19 by codelion | 2 comments on Hacker News.
I built AutoThink, a technique that makes local LLMs reason more efficiently by adaptively allocating computational resources based on query complexity. The core idea: instead of giving every query the same "thinking time," classify queries as HIGH or LOW complexity and allocate thinking tokens accordingly. Complex reasoning gets 70-90% of tokens, simple queries get 20-40%. I also implemented steering vectors derived from Pivotal Token Search (originally from Microsoft's Phi-4 paper) that guide the model's reasoning patterns during generation. These vectors encourage behaviors like numerical accuracy, self-correction, and thorough exploration.

Results on DeepSeek-R1-Distill-Qwen-1.5B:
- GPQA-Diamond: 31.06% vs 21.72% baseline (+43% relative improvement)
- MMLU-Pro: 26.38% vs 25.58% baseline
- Uses fewer tokens than baseline approaches

Works with any local reasoning model - DeepSeek, Qwen, custom fine-tuned models. No API dependencies. The technique builds on two things I developed: an adaptive classification framework that can learn new complexity categories without retraining, and an open source implementation of Pivotal Token Search.

Technical paper: https://ift.tt/WqaK0hF
Code and examples: https://ift.tt/PUboNrj...
PTS implementation: https://ift.tt/SnpeTvX

I'm curious about your thoughts on adaptive resource allocation for AI reasoning. Have you tried similar approaches with your local models?
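For intuition on the allocation step, here is a minimal sketch of complexity-based thinking-token budgeting. This is not AutoThink's actual code: the classifier heuristic, budget fractions, and function names below are hypothetical.

```python
# Minimal sketch of complexity-based thinking-token allocation.
# The classifier heuristic and budget fractions are illustrative only.
MAX_THINKING_TOKENS = 4096

def classify_complexity(query: str) -> str:
    """Toy stand-in for an adaptive classifier: long or reasoning-heavy
    queries are labeled HIGH, everything else LOW."""
    markers = ("prove", "derive", "optimize", "step by step", "algorithm")
    if len(query.split()) > 40 or any(m in query.lower() for m in markers):
        return "HIGH"
    return "LOW"

def thinking_budget(query: str) -> int:
    """HIGH-complexity queries get ~80% of the thinking-token budget,
    LOW-complexity ones ~30%, mirroring the 70-90% / 20-40% split above."""
    share = 0.8 if classify_complexity(query) == "HIGH" else 0.3
    return int(MAX_THINKING_TOKENS * share)

if __name__ == "__main__":
    for q in ("What is the capital of France?",
              "Prove step by step that the sum of the first n odd numbers is n^2."):
        print(f"{classify_complexity(q):>4} | {thinking_budget(q):4d} tokens | {q}")
```

In a full pipeline, the returned budget would cap the model's reasoning tokens before it emits its final answer.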
Ask HN: Is anyone using AI conversation partners?
6 by rickcarlino | 1 comment on Hacker News.
I'm obsessed with applying LLMs in language-learning software. One thing I am not-so-obsessed with is the wave of conversational chat apps that many startups have begun offering. Having tried them myself, I find them quite bland ("Tell me about your day!") and they often use a speech style that directly translates English phrases into the target language. I see plenty of potential for LLMs in this space, but the conversational chat bots I have seen so far are too open-ended and seem half-baked. For users: Is anyone finding these tools helpful? For the people building them: Are people actually returning to the product?
Show HN: Connecting People Through AI-Powered Video Sentiment Matching
5 by armini | 0 comments on Hacker News.
Hi HN, I’d like to share www.kuky.com, a peer support network that connects people through short, self-recorded videos and matches them using sentiment analysis powered by large language models (LLMs). We’re building Kuky to help users find others who genuinely understand their emotional journey—not through swiping or likes, but through shared human stories.

In this short Loom demo (link above), I walk through how:
- Users create a profile by uploading 3 videos: an intro, their mental health journey, and their likes/dislikes
- LLMs analyze each video for emotional tone, key themes, and psychological markers
- Based on this, Kuky intelligently connects users with similar experiences and emotional alignment

We're passionate about creating a safe, empathetic space for authentic conversation—especially for people dealing with mental health challenges. For more background, here’s a feature on Kuky in Women Love Tech: https://ift.tt/EZoSD3v... Would love your thoughts on the concept, the matching algorithm, and how you’d imagine using something like this.

Thanks, Armin
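Purely as a sketch of what such matching could look like (Kuky's real pipeline, models, and scoring aren't described here; the profile vectors and user names below are made up), one approach is to reduce each user's analyzed videos to a feature vector and pair users by cosine similarity:

```python
# Illustrative only: match users by cosine similarity of emotional-profile
# vectors. In a real system these vectors would come from LLM analysis of
# the three profile videos; here they are hard-coded toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-user scores, e.g. (anxiety, grief, optimism).
profiles = {
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.8, 0.2, 0.5],
    "cara":  [0.1, 0.9, 0.2],
}

def best_match(user: str) -> str:
    others = ((cosine(profiles[user], vec), other)
              for other, vec in profiles.items() if other != user)
    return max(others)[1]

print(best_match("alice"))  # "bob" has the closest emotional profile
```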
Global high-performance proof-of-stake blockchain with erasure coding
4 by lawrenceyan | 2 comments on Hacker News.
Ever wondered how Facebook spoofs Gmail message list snippet text?
19 by chrisjj | 10 comments on Hacker News.
E.g. the Gmail inbox shows that a message contains "XXX tagged you on Facebook. Take a look about what she said on you." But when you open the message, there's no "Take a look about what she said on you." The answer: the text is present but hidden: "Take a look at what she said about you." And unsurprisingly, whenever I do click through, I find she hasn't said anything about me.
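A minimal sketch of the trick, assuming the common hidden-"preheader" pattern rather than Facebook's exact markup: put the snippet text first in the HTML body inside an element that CSS keeps from rendering, so Gmail uses it for the inbox preview while the opened message never shows it.

```python
# Sketch of a hidden "preheader": Gmail builds the inbox snippet from the
# first text in the HTML body even when CSS hides it from the rendered view.
# The markup is illustrative, not Facebook's actual email template.
from email.mime.text import MIMEText

html = """\
<html><body>
  <!-- Hidden preheader: appears in the inbox snippet, not in the message -->
  <div style="display:none; max-height:0; overflow:hidden;">
    Take a look about what she said on you.
  </div>
  <p>XXX tagged you on Facebook.</p>
  <p><a href="https://www.facebook.com/">See the post</a></p>
</body></html>
"""

msg = MIMEText(html, "html")
msg["Subject"] = "You were tagged in a post"
print(msg.as_string())
```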
An Extreme Cousin for Pluto? Possible Dwarf Planet at Solar System Edge
5 by raattgift | 0 comments on Hacker News.
As a developer, my most important tools are a pen and a notebook
21 by ingve | 20 comments on Hacker News.
Another way electric cars clean the air: study says brake dust reduced by 83%
29 by xbmcuser | 20 comments on Hacker News.
Why are 2025/05/28 and 2025-05-28 different days in JavaScript?
5 by brandon_bot | 1 comment on Hacker News.