Hacker News
24K subscribers
117K links
Top stories from https://news.ycombinator.com (with 100+ score)
Contribute to the development here: https://github.com/phil-r/hackernewsbot
Also check https://t.me/designer_news

Contacts: @philr
Show HN: Clippy – 90s UI for local LLMs (🔥 Score: 157+ in 1 hour)

Link: https://readhacker.news/s/6u2CG
Comments: https://readhacker.news/c/6u2CG
Launch HN: Exa (YC S21) – The web as a database (🔥 Score: 153+ in 3 hours)

Link: https://readhacker.news/c/6u2UK

Hey HN! We’re Will and Jeff from Exa (https://exa.ai). We recently launched Exa Websets, an embeddings-powered search engine designed to return exactly what you’re asking for. You can get precise results for complex queries like “all startups working on open-source developer tools based in SF, founded 2021-2025”.
Demo here - https://youtu.be/Unt8hJmCxd4
We started working on Exa because we were frustrated that while LLM state-of-the-art is advancing every week, Google has gotten worse over time. The Internet used to feel like a magical information portal, but it doesn’t feel that way anymore when you’re constantly being pushed towards SEO-optimized clickbait.
Websets is a step in the opposite direction. For every search, we perform dozens of embedding searches over Exa’s vector database of the web to find good candidates, and then we run agentic workflows on each result to verify that it matches exactly what you asked for.
Websets results are good for two reasons. First, we train custom embedding models for our main search algorithm, rather than relying on typical keyword-matching search algorithms. Our embedding models are trained specifically to return exactly the type of entity you ask for. In practice, that means if you search “startups working in nanotech”, keyword-based search engines return listicles about nanotech startups, because those listicles match the keywords in the query. In contrast, our embedding models return actual startup homepages, because those homepages match the meaning of the query.
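To illustrate the general mechanics of ranking by embedding similarity rather than keyword overlap, here is a minimal Python sketch. It uses an off-the-shelf sentence-transformers model as a stand-in; Exa trains its own custom models, so this shows only the retrieval mechanics, not their entity-type behavior.

# Minimal sketch of embedding-based retrieval; the model and documents are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for a custom embedding model

docs = [
    "Acme Nano - we build carbon nanotube sensors for industrial monitoring",  # a startup homepage
    "Top 10 nanotech startups to watch in 2025",                               # an SEO listicle
]
query = "startups working in nanotech"

doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity between the query and document embeddings.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")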
The second reason is that LLMs provide the last-mile intelligence needed to verify every result. Each result and piece of data is backed by supporting references that we used to validate that it actually matches your search criteria. That’s why Websets can take minutes or even hours to run, depending on your query and how many results you ask for. For valuable search queries, we think this is worth it.
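As a rough sketch of this kind of last-mile verification (not Exa’s actual workflow), one could ask an LLM to check a single criterion against a candidate page and quote its evidence. This example uses the OpenAI Python client as a stand-in model; the prompt, criterion, and page text are made up.

# Hypothetical per-result verification step; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def verify(candidate_text: str, criterion: str) -> str:
    """Ask an LLM whether the page text satisfies one search criterion."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer YES or NO, then quote the sentence that supports your answer."},
            {"role": "user",
             "content": f"Criterion: {criterion}\n\nPage text:\n{candidate_text}"},
        ],
    )
    return resp.choices[0].message.content

print(verify("Acme Nano was founded in San Francisco in 2022 by two ex-fab engineers.",
             "startup founded between 2021 and 2025"))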
Also notably, Websets are tables, not lists. You can add “enrichment” columns to find more information about each result, like “# of employees” or “does author have blog?”, and the cells asynchronously load in. This table format hopefully makes the web feel more like a database.
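As a toy sketch of how enrichment cells might load in asynchronously, row by row and column by column (the rows, columns, and lookup below are hypothetical placeholders, not Exa’s implementation):

# Each (row, column) cell is enriched concurrently and printed as it completes.
import asyncio

async def enrich(row: dict, column: str) -> tuple:
    await asyncio.sleep(0.1)  # stand-in for a real lookup (search + LLM extraction)
    return row["url"], column, f"<value for {column}>"

async def main():
    rows = [{"url": "https://example-startup.com"}, {"url": "https://another-startup.dev"}]
    columns = ["# of employees", "does author have blog?"]
    cells = [enrich(r, c) for r in rows for c in columns]
    for finished in asyncio.as_completed(cells):
        url, column, value = await finished
        print(f"{url} | {column} -> {value}")

asyncio.run(main())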
A few examples of searches that work with Websets:
- “Math blogs created by teachers from outside the US”: https://websets.exa.ai/cma1oz9xf007sis0ipzxgbamn
- "research paper about ways to avoid the O(n^2) attention problem in transformers, where one of the first author's first name starts with "A","B", "S", or "T", and it was written between 2018 and 2022”: https://websets.exa.ai/cm7dpml8c001ylnymum4sp11h
- “US based healthcare companies, with over 100 employees and a technical founder": https://websets.exa.ai/cm6lc0dlk004ilecmzej76qx2
- “all software engineers in the Bay Area, with experience in startups, who know Rust and have published technical content before”: https://youtu.be/knjrlm1aibQ
You can try it at https://websets.exa.ai/ and API docs are at https://docs.exa.ai/websets. We’d love to hear your feedback!
Accents in latent spaces: How AI hears accent strength in English (Score: 150+ in 7 hours)

Link: https://readhacker.news/s/6u2rd
Comments: https://readhacker.news/c/6u2rd
Show HN: Sheet Music in Smart Glasses (Score: 152+ in 12 hours)

Link: https://readhacker.news/c/6u2MC

Hi everyone, my name is Kevin Lin, and this is a Show HN for my sheet music smart glasses project. My video was on the front page on Friday: https://news.ycombinator.com/item?id=43876243, but dang said we should do a Show HN as well, so here goes!
I’ve wanted to put sheet music into smart glasses for a long time, but the perfect opportunity to execute came in mid-February, when Mentra (YC W25) tweeted about a smart glasses hackathon they were hosting - winners would get to take home a pair. I went, had a blast making a bunch of music-related apps with my teammate, and we won, so I got to take them home, refine the project, and make a pretty cool video about it (https://www.youtube.com/watch?v=j36u2i7PKKE).
The glasses are Even Realities G1s. They look normal, but they have two microphones, a screen in each lens, and can even be made with a prescription. Every person I’ve met who tried them on was surprised at how good the display is, and the video recordings of them unfortunately don’t do them justice.
The software runs on AugmentOS, Mentra’s smart glasses operating system that works on various third-party smart glasses, including the G1s. All I had to do to make an app was write and run a TypeScript file using the AugmentOS SDK. This gives you voice transcription and raw audio as input, and text or bitmaps as output to the screens; everything else is completely abstracted away. Your glasses communicate with an AugmentOS app, and that app communicates with your TypeScript service.
The only hard part was creating a Python script to turn sheet music (MusicXML format) into small, optimized bitmaps to display on the screens. To start, the existing landscape of music-related Python libraries is pretty poorly documented, and I ran into multiple never-before-seen error messages. Downscaling to the small size of the glasses screens also meant that stems and staff lines were disappearing, so I used morphological dilation to emphasize them without making the notes unintelligible. The final pipeline was MusicXML -> music21 library to render chunks of bars to PNG -> dilate with OpenCV -> downscale -> convert to bitmap with Pillow -> optimize bitmaps with ImageMagick. This is far from the best code I’ve ever written, but the LLMs’ attempts at this whole task were abysmal, and my years of Python experience really got to shine here. The code is on GitHub: https://github.com/kevinlinxc/AugmentedChords.
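The real pipeline lives in the linked repo; below is a rough, simplified Python sketch of just the thicken/downscale/bitmap steps, assuming a chunk of bars has already been rendered to bars.png. The kernel size, target width, and ImageMagick invocation are guesses, not the repo’s actual values.

import subprocess
import cv2
import numpy as np
from PIL import Image

# Load a pre-rendered chunk of bars as grayscale (black notation on a white background).
img = cv2.imread("bars.png", cv2.IMREAD_GRAYSCALE)

# Eroding the white background thickens the dark stems and staff lines
# (equivalent to dilating the black strokes) so they survive downscaling.
kernel = np.ones((3, 3), np.uint8)
thickened = cv2.erode(img, kernel, iterations=1)

# Downscale toward the glasses' display width (576 px here is a guess).
h, w = thickened.shape
target_w = 576
small = cv2.resize(thickened, (target_w, int(h * target_w / w)),
                   interpolation=cv2.INTER_AREA)

# Convert to a 1-bit bitmap with Pillow, then strip metadata with ImageMagick 7's CLI.
Image.fromarray(small).convert("1").save("bars.bmp")
subprocess.run(["magick", "bars.bmp", "-strip", "bars_optimized.bmp"], check=True)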
Putting it together, my TypeScript service serves these bitmaps locally when requested. I put together a UI where I can navigate menus and sheet music with voice commands (e.g. show catalog, next, select, start, exit, pause), and then I connected foot pedals to my laptop. Because of bitmap-sending latency (~3s right now, but future glasses will do better), using foot pedals to turn the bars while playing wasn’t viable, so I instead had one pedal toggle autoscrolling and the other two speed up or temporarily pause the scrolling.
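The real service is written in TypeScript; as a purely hypothetical Python sketch of the pedal-driven scrolling state described above (all names and timings are made up):

import time

class AutoScroller:
    def __init__(self, seconds_per_chunk: float = 8.0):
        self.seconds_per_chunk = seconds_per_chunk  # how long each chunk of bars stays up
        self.scrolling = False
        self.hold_until = 0.0
        self.chunk = 0

    def pedal_toggle(self):                        # pedal 1: start/stop autoscrolling
        self.scrolling = not self.scrolling

    def pedal_faster(self):                        # pedal 2: scroll faster
        self.seconds_per_chunk = max(1.0, self.seconds_per_chunk - 1.0)

    def pedal_hold(self, seconds: float = 4.0):    # pedal 3: linger on the current bars
        self.hold_until = time.monotonic() + seconds

    def run(self, send_bitmap, total_chunks: int):
        # send_bitmap(i) stands in for pushing the i-th pre-rendered chunk to the glasses.
        while self.chunk < total_chunks:
            if self.scrolling and time.monotonic() >= self.hold_until:
                send_bitmap(self.chunk)
                self.chunk += 1
                time.sleep(self.seconds_per_chunk)
            else:
                time.sleep(0.1)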
After lots of adjustments, I was able to play a full song using just the glasses! It took many takes and there was definitely lots of room for improvement. For example:
- Bitmap sending is pretty slow, which is why using the foot pedals to turn bars wasn’t viable.
- The resolution is pretty small; I would love to put more bars in at once so I can flip less frequently.
- Since foot pedals aren’t portable, it would be cool to have a mode where the audio dictates when the sheet music changes. I tried implementing that with FFT (a rough sketch of the idea is below), but it was often wrong and more effort is needed. Head-tilt controls would be cool too, because full manual control is a hard requirement for practicing.
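A rough numpy sketch of the FFT idea mentioned in the last point: find the dominant frequency in one short window of audio. Real note-following needs onset detection, harmonics, and polyphony handling, which is presumably why this alone was often wrong.

import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int = 16000) -> float:
    """Return the strongest frequency (Hz) in one short window of mono audio."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

# Example: a synthetic 440 Hz tone should come back as roughly 440.
t = np.arange(0, 0.1, 1.0 / 16000)
print(dominant_frequency(np.sin(2 * np.pi * 440 * t)))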
All of these pain points are being targeted by Mentra and other companies competing in the space, and so I’m super excited to see the next generation! Also, feel free to ask me anything!
iOS Kindle app now has a ‘get book’ button after changes to App Store rules (Score: 151+ in 7 hours)

Link: https://readhacker.news/s/6u3K2
Comments: https://readhacker.news/c/6u3K2