First Successful Lightning Triggering and Guiding Using a Drone
13 by gnabgib | 5 comments on Hacker News.
Show HN: Nowrite.fun – If you stop typing, your text disappears
5 by vseplet | 2 comments on Hacker News.
Hey HN, I built a small experimental writing tool called nowrite.fun. The concept is brutally simple: you set a timer (e.g. 5 minutes), start writing — and if you stop typing, your text vanishes. No drafts, no recovery, no forgiveness. Inspired by apps like Write or Die, but rebuilt from scratch with a lightweight stack: Deno + TypeScript, XState, anime.js, and canvas-confetti. Hosted on Deno Deploy.

It’s a tool for short, focused writing sprints — a tweet, a blog paragraph, a newsletter blurb. Keep typing → you win. Hesitate → it’s gone. It’s minimal, a bit stressful, and surprisingly motivating.

I also posted it on Product Hunt to see if this kind of weird, borderline masochistic UX resonates with anyone: https://ift.tt/ZG3Q1CX If you find it fun or useful, I’d really appreciate an upvote! Would love your thoughts, feature ideas, or performance roasts. Try it here: https://nowrite.fun
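The core mechanic is easy to model. Below is a toy Python sketch of the vanish-on-idle rule; it is not the app's code (the real thing is Deno + TypeScript with XState), and the 5-second idle threshold is an assumption for illustration only.

```python
# Toy model of "stop typing and it vanishes", not nowrite.fun's actual implementation.
import time


class VanishingBuffer:
    def __init__(self, session_seconds: float = 300.0, idle_limit: float = 5.0):
        self.deadline = time.monotonic() + session_seconds  # e.g. a 5-minute sprint
        self.idle_limit = idle_limit                        # max pause between keystrokes (assumed)
        self.last_keystroke = time.monotonic()
        self.text = ""

    def on_keystroke(self, ch: str) -> None:
        now = time.monotonic()
        if now - self.last_keystroke > self.idle_limit:
            self.text = ""                                  # hesitated: everything is gone
        self.last_keystroke = now
        self.text += ch

    def finished(self) -> bool:
        return time.monotonic() >= self.deadline            # survived the timer: you win
```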
Show HN: Index – new SOTA Open Source browser agent
17 by skull8888888 | 0 comments on Hacker News.
Hey HN, Robert from Laminar (lmnr.ai) here. We built Index, a new SOTA open-source browser agent. It reached 92% on WebVoyager with Claude 3.7 (extended thinking). o1 was used as a judge, and we also manually double-checked the judge. At the core is the same old idea: run a simple JS script in the browser to identify interactable elements -> draw bounding boxes around them on a screenshot of the browser window -> feed it to the LLM.

What made Index so good:
1. We essentially created browser agent observability. We patched Playwright to record the entire browser session while the agent operates, simultaneously tracing all agent steps and LLM calls. Then we synchronized everything in the UI, creating an unparalleled debugging experience. This allowed us to pinpoint exactly where the agent fails by seeing what it "sees" in session replay alongside execution traces.
2. Our detection script is simple but extremely good. It's carefully crafted via trial and error. We also employed CV and OCR.
3. The agent is very simple, literally just a while loop. All the power comes from a carefully crafted prompt and a ton of eval runs.

Index is a simple Python package. It also comes with a beautiful CLI: `pip install lmnr-index`, then `playwright install chromium`, then `index run`. We've recently added o4-mini, Gemini 2.5 Pro and Flash. Pro is extremely good and fast. Give it a try via the CLI. You can also use Index via the serverless API ( https://ift.tt/9w3jMlR ) or via the chat UI: https://lmnr.ai/chat . To learn more about browser agent observability and evals, check out the open-source repo ( https://ift.tt/QLTblsz ) and our docs ( https://ift.tt/r45O9i3 ).
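For readers who want the shape of that while loop, here is a minimal Python sketch of the perceive -> annotate -> ask-LLM -> act cycle described above. The class and function names are placeholders of mine, not the lmnr-index API; the real agent also carries the observability and prompt work mentioned in the post.

```python
# Hypothetical sketch of the "while loop" browser-agent pattern; names are stand-ins.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    kind: str                    # e.g. "click", "type", "done"
    target: Optional[int] = None  # index of an interactable element, if any
    text: str = ""


def detect_interactables(page) -> list:
    """Stand-in for the injected JS detection script: returns boxes for interactable elements."""
    return []


def annotate_screenshot(screenshot: bytes, boxes: list) -> bytes:
    """Stand-in: draw numbered bounding boxes onto the screenshot."""
    return screenshot


def ask_llm(prompt: str, image: bytes) -> Action:
    """Stand-in for the LLM call that picks the next action from the annotated screenshot."""
    return Action(kind="done")


def run_agent(page, task: str, max_steps: int = 25) -> None:
    """`page` is assumed to be a Playwright Page; page.screenshot() returns PNG bytes."""
    for _ in range(max_steps):
        boxes = detect_interactables(page)
        shot = annotate_screenshot(page.screenshot(), boxes)
        action = ask_llm(f"Task: {task}\nChoose the next action.", shot)
        if action.kind == "done":
            break
        # Otherwise, execute the chosen action against the numbered element (click, type, ...).
```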
Google blocked Motorola use of Perplexity AI, witness says
24 by welpandthen | 6 comments on Hacker News.
Show HN: CocoIndex – Open-Source Data framework for AI, built for data freshness
3 by badmonster | 0 comments on Hacker News.
Hi HN, I’ve been working on CocoIndex, an open-source data ETL framework to transform data for AI, optimized for data freshness. You can start a CocoIndex project with `pip install cocoindex` and declare a data flow that builds ETL like LEGO: a RAG pipeline for vector embeddings, knowledge graphs, or extracting and transforming data with LLMs. It is a data processing framework that goes beyond text. When you run the data flow, in either live mode or batch mode, it processes the data incrementally with minimal recomputation, making it very fast to update the target stores on source changes. Get started video: https://www.youtube.com/watch?v=gv5R8nOXsWU Demo video: https://www.youtube.com/watch?v=ZnmyoHslBSc

Previously, I worked at Google on projects like search indexing and ETL infra for 8 years. After I left Google last year, I built various projects and went through pivoting hell. In all the projects I’ve built, data still sits at the center of the problem, and I found myself focusing on building data infra rather than the business logic I need for data transformation. The current prepackaged RAG-as-a-service offerings don't serve my needs, because I need to choose a different strategy for the context, and I also need deduplication, clustering (items are related), and other custom features that are commonly needed. That’s where CocoIndex starts.

A simple philosophy behind it: data transformation is similar to formulas in spreadsheets. The ground truth is the source data; all the transformation steps and the final target store are derived data, and should react to changes in the source. If you use CocoIndex, you only need to worry about defining transformations, like formulas.

*Data flow paradigm* came in as an immediate choice - because there are no side effects, lineage and observability come out of the box.

*Incremental processing* - If you are a data expert, an analogy would be a materialized view beyond SQL. The framework relies on static sharding to track pipeline states and only reprocesses the necessary portions. When data has changed, the framework handles change data capture comprehensively, combining push and pull mechanisms. It then clears stale derived data/versions and re-indexes data based on tracked data/logic changes or data TTL settings. There are lots of edge cases to get right - for example, when a row is referenced in other places and that row changes. These should be handled at the level of the framework. (A toy illustration of this incremental idea follows below.)

*At the compute engine level* - the framework should handle multiple processes and concurrent updates. It should also be able to resume existing states from a terminated execution. In the end, we want to build a framework that is easy to build with, with exceptional velocity, but scalable and robust in production.

*Standardized interface throughout the data flow* - really easy to plug in custom logic like LEGO, with a variety of native built-in components. One example: it takes a few lines to switch among Qdrant, Postgres, and Neo4j.

CocoIndex is licensed under Apache 2.0: https://ift.tt/snOo6vH Getting started: https://ift.tt/Zu5TGjh Excited to hear your thoughts, and thank you so much! Linghua
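To make the "spreadsheet formula" framing concrete, here is a toy Python sketch of incremental reprocessing keyed by a content hash. It is not the cocoindex API, just an illustration of derived data that is recomputed only when its source row changes and cleared when the source disappears.

```python
# Toy illustration of "formulas over source data": derived values are recomputed
# only when the source row they depend on has changed (tracked by a content hash).
# This is not the cocoindex API.
import hashlib


def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def incremental_transform(source: dict, state: dict, target: dict, transform) -> None:
    """Update `target` in place, reprocessing only changed or new source rows."""
    for key, text in source.items():
        h = content_hash(text)
        if state.get(key) == h:
            continue                      # unchanged source row: skip recomputation
        target[key] = transform(text)     # derived data, like a spreadsheet formula
        state[key] = h
    for key in list(target):              # clear derived data whose source is gone
        if key not in source:
            del target[key]
            state.pop(key, None)


# Example: the "transformation" is just an uppercase formula here.
state, target = {}, {}
incremental_transform({"doc1": "hello", "doc2": "world"}, state, target, str.upper)
incremental_transform({"doc1": "hello", "doc2": "world!"}, state, target, str.upper)
print(target)  # {'doc1': 'HELLO', 'doc2': 'WORLD!'}; only doc2 was recomputed on the second run
```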
Shortest walking tour to 81,998 bars in Korea – TSP solved in 178 days
15 by geeknews | 0 comments on Hacker News.
Clinical trial: novel nutritional formula treats gut microbial overgrowth
4 by wglb | 1 comment on Hacker News.