Nick Clegg says asking artists for use permission would 'kill' AI industry
6 by olyellybelly | 5 comments on Hacker News.
The cost of AI is being paid in deserts far from Silicon Valley
5 by donohoe | 0 comments on Hacker News.
Where hyperscale hardware goes to retire: Ars visits a big ITAD site
8 by rntn | 0 comments on Hacker News.
German court sends VW execs to prison over Dieselgate scandal
4 by Tomte | 0 comments on Hacker News.
Cloudflare CEO: Football Piracy Blocks Will Claim Lives; "I Pray No One Dies"
18 by reynaldi | 3 comments on Hacker News.
Mathpad: A mathematical keypad for students and professionals
11 by todsacerdoti | 5 comments on Hacker News.
Emilua: an execution engine and runtime for your Lua programs
9 by delduca | 1 comment on Hacker News.
Ask HN: Building LLM apps? How are you handling user context?
10 by marcospassos | 2 comments on Hacker News.
I've been building stuff with LLMs, and every time I need user context, I end up manually wiring up a context pipeline. Sure, the model can reason and answer questions well, but it has zero idea who the user is, where they came from, or what they've been doing in the app. Without that, I either have to make the model ask awkward initial questions to figure it out or let it guess, which is usually wrong. So I keep rebuilding the same setup: tracking events, enriching sessions, summarizing behavior, and injecting that into prompts. It makes the app way more helpful, but it's a pain.

What I wish existed is a simple way to grab a session summary or user context I could just drop into a prompt. Something like:

const context = await getContext();
const response = await generateText({
  system: `Here's the user context: ${context}`,
  messages: [...],
});

Some examples of how I use this:

- For support, I pass in the docs they viewed or the error page they landed on.
- For marketing, I summarize their journey, like 'ad clicked' → 'blog post read' → 'pricing page'.
- For sales, I highlight behavior that suggests whether they're a startup or an enterprise.
- For product, I classify the session as 'confused', 'exploring plans', or 'ready to buy'.
- For recommendations, I generate embeddings from recent activity and use that to match content or products more accurately.

In all of these cases, I usually inject things like recent activity, timezone, currency, traffic source, and any signals I can gather that help guide the experience.

Has anyone else run into this same issue? Found a better way? I'm considering building something around this, initially to solve my own problem. I'd love to hear how others are handling it or if this sounds useful to you.
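For anyone who wants to try the pattern before a product exists, here is a minimal sketch of what a getContext() helper could look like. Everything in it is hypothetical: fetchRecentEvents, the event shape, and the placeholder timezone/currency are stand-ins for your own analytics backend; the generateText call follows the Vercel AI SDK, but any chat-completion API would slot in the same way.

// Hypothetical sketch; not a real library. Swap the stubs for your stack.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

interface SessionEvent {
  type: string;   // e.g. "page_view", "ad_click", "error"
  detail: string; // e.g. "/pricing", "blog: onboarding guide"
  at: Date;
}

// Stub event store standing in for your analytics backend.
async function fetchRecentEvents(_userId: string): Promise<SessionEvent[]> {
  return [
    { type: "ad_click", detail: "campaign: launch", at: new Date() },
    { type: "page_view", detail: "/pricing", at: new Date() },
  ];
}

async function getContext(userId: string): Promise<string> {
  const events = await fetchRecentEvents(userId);
  // Flatten the journey into one prompt-friendly line per event.
  const journey = events
    .map((e) => `${e.at.toISOString()} ${e.type}: ${e.detail}`)
    .join("\n");
  // Timezone/currency would normally come from the client session;
  // hard-coded placeholders here.
  return `Timezone: UTC\nCurrency: USD\nRecent activity:\n${journey}`;
}

// Usage, matching the snippet in the post:
async function answer(userId: string, question: string) {
  const context = await getContext(userId);
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    system: `Here's the user context: ${context}`,
    messages: [{ role: "user", content: question }],
  });
  return text;
}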
Launch HN: Nomi (YC X25) – Cursor for Sales
4 by ethansafar | 0 comments on Hacker News.
Hey HN, we’re Swan and Ethan, and we’re building https://heynomi.com, a real-time AI that helps you sell better while you're actually in the call.

Demo: https://youtu.be/XFxDCP8jdY8?si=CGnPM1zT4wxAvadJ

Most of us aren’t trained in sales. We weren’t either. But in the early days, it’s the founders who have to sell, and learning that on live calls is brutal. After one particularly painful deal we lost, we joked that we needed an AI cofounder who could talk in our ear and save us from ourselves. That joke turned into a prototype, then a product.

Nomi joins your video calls and gives you phrase suggestions when it matters most:

- when someone pushes back,
- when there’s a hidden signal worth digging into,
- when it’s time to close (and how to do it without sounding pushy).

Then after the call, it auto-generates clean CRM notes and action items, and sends you a short email breaking down what went well, what didn’t, and how to improve next time, based on your actual conversation.

We’re also rolling out features to make every call a learning opportunity and unlock revenue potential:

- A/B testing different sets of objections during calls and comparing results in real time
- Auto-updating your company’s sales playbook based on what’s actually working
- Spotting upsell opportunities and making predictive revenue estimates driven by what people say on calls

The real-time part was the hardest. If advice shows up 2 seconds late, or it's off-topic, it's worse than useless. So we built a system with:

- a Thinking Model to track the call’s momentum;
- a tactic selector trained with reinforcement learning;
- a lightweight LLM (boosted with RAG) that delivers custom phrase suggestions under 500 ms.

Each user gets a private copilot trained on their own calls (with permission), plus simulated data and sales best practices. It gets sharper with every interaction, no manual tuning needed.

Right now, we’re live with 30 teams. One company went from $200K to $360K in just a few weeks. Another brought on a new rep who closed their first deal with Nomi in week one.

We also offer a free AI note-taker and free sales-coaching post-call emails. Just shoot us an email if you want to try it: founders@heynomi.com

We’re launching on HN to meet other folks who’ve felt this pain: founders doing sales, builders figuring things out on the fly. If that’s you, we’d love your feedback. Or if you just want to geek out about fast-inference LLMs, streaming RAG, or real-time UX, happy to go deep. Let us know what you think!
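The three-stage loop they describe maps naturally onto code. Below is a speculative sketch of that pipeline; every name is hypothetical and inferred from the post, not Nomi's actual API or models. A keyword classifier stands in for the Thinking Model, a static lookup stands in for the RL-trained tactic selector, and a Promise.race deadline reflects the under-500 ms constraint (late advice is dropped rather than shown).

// Speculative sketch of the described pipeline; all names are hypothetical.
type Momentum = "neutral" | "pushback" | "buying_signal" | "closing";

interface CallState {
  transcript: string[]; // rolling transcript, latest utterance last
  momentum: Momentum;
}

// Stage 1: the "thinking model" tracks momentum. Stubbed with keyword
// rules here; the post implies a small dedicated model.
function trackMomentum(latestUtterance: string): Momentum {
  if (/too expensive|not sure|concern/i.test(latestUtterance)) return "pushback";
  if (/budget|timeline|next steps/i.test(latestUtterance)) return "buying_signal";
  if (/contract|sign|start next week/i.test(latestUtterance)) return "closing";
  return "neutral";
}

// Stage 2: the tactic selector. The post says this is RL-trained; a
// static lookup stands in for the learned policy.
const tacticFor: Record<Momentum, string> = {
  neutral: "ask an open discovery question",
  pushback: "acknowledge the objection, then reframe around value",
  buying_signal: "dig into the signal with a follow-up question",
  closing: "propose one concrete next step",
};

// Stage 3: a fast LLM call turns the tactic into a phrase, raced
// against a deadline so stale advice is dropped instead of shown late.
async function suggestPhrase(
  state: CallState,
  generate: (prompt: string) => Promise<string>, // inject any fast LLM client
  deadlineMs = 500,
): Promise<string | null> {
  const prompt =
    `Tactic: ${tacticFor[state.momentum]}\n` +
    `Transcript tail:\n${state.transcript.slice(-5).join("\n")}\n` +
    `Suggest one short phrase the seller could say next.`;
  const deadline = new Promise<null>((resolve) =>
    setTimeout(() => resolve(null), deadlineMs),
  );
  return Promise.race([generate(prompt), deadline]);
}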
WavePhoenix – open-source implementation of the Nintendo WaveBird protocol
11 by zdw | 0 comments on Hacker News.
Data breach exposes 184M passwords for Google, Microsoft, Facebook
20 by taubek | 3 comments on Hacker News.