Google blocked Motorola use of Perplexity AI, witness says
24 by welpandthen | 6 comments on Hacker News.
Show HN: CocoIndex – Open-Source Data framework for AI, built for data freshness
3 by badmonster | 0 comments on Hacker News.
Hi HN, I’ve been working on CocoIndex, an open-source ETL framework that transforms data for AI, optimized for data freshness. You can start a CocoIndex project with `pip install cocoindex` and declare a data flow that builds ETL like LEGO: a RAG pipeline for vector embeddings, knowledge graphs, or extracting and transforming data with LLMs. It is a data processing framework that goes beyond text. When you run the data flow in either live mode or batch mode, it processes the data incrementally with minimal recomputation, making it fast to update the target stores when the source changes.

Get started video: https://www.youtube.com/watch?v=gv5R8nOXsWU
Demo video: https://www.youtube.com/watch?v=ZnmyoHslBSc

Previously, I worked at Google for 8 years on projects like search indexing and ETL infra. After I left Google last year, I built various projects and went through pivoting hell. In every project I built, data still sat at the center of the problem, and I found myself building data infra rather than the business logic I needed for data transformation. The current prepackaged RAG-as-a-service offerings don't serve my needs: I need to choose a different strategy for the context, and I also need deduplication, clustering (items are related), and other custom features that are commonly needed. That’s where CocoIndex started.

The philosophy behind it is simple: data transformation is like formulas in a spreadsheet. The source data is the source of truth; every transformation step and the final target store are derived data, and should react to changes in the source. With CocoIndex, you only need to worry about defining the transformations, like formulas.

*Data flow paradigm* - this was an immediate choice: because there are no side effects, lineage and observability come out of the box.

*Incremental processing* - if you are a data expert, an analogy would be a materialized view, beyond SQL.
The framework uses static sharding to track pipeline state and reprocesses only the necessary portions. When data changes, the framework handles change data capture comprehensively, combining push and pull mechanisms, then clears stale derived data/versions and re-indexes based on tracked data/logic changes or data TTL settings. There are lots of edge cases to get right - for example, when a row is referenced elsewhere and that row changes. These should be handled at the framework level.

*At the compute engine level* - the framework should handle multiple processes and concurrent updates, and should be able to resume state after a terminated execution. In the end, we want a framework that is easy to build with at exceptional velocity, yet scalable and robust in production.

*Standardized interface throughout the data flow* - it's really easy to plug in custom logic like LEGO, alongside a variety of native built-in components. For example, it takes only a few lines to switch among Qdrant, Postgres, and Neo4j.

CocoIndex is licensed under Apache 2.0: https://ift.tt/snOo6vH
Getting started: https://ift.tt/Zu5TGjh

Excited to hear your thoughts, and thank you so much!
Linghua
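The incremental-processing idea above (recompute derived data only for source rows that actually changed, and drop derived data whose source row disappeared) can be sketched conceptually in a few lines of plain Python. This is not the CocoIndex API - the class and names here are hypothetical, and the content-hash fingerprinting is just one illustrative change-tracking strategy:

```python
import hashlib


def transform(row: str) -> str:
    # Stand-in for an expensive derivation step (embedding, LLM extraction, ...).
    return row.upper()


class IncrementalIndex:
    """Conceptual sketch: derived data is recomputed only for rows whose
    source content changed, and stale derived rows are cleared when their
    source row is deleted."""

    def __init__(self):
        self.fingerprints = {}  # row id -> content hash of last-processed source
        self.target = {}        # row id -> derived value (the "target store")

    def sync(self, source: dict) -> int:
        """Bring the target store in line with `source`; return how many
        rows were actually recomputed."""
        recomputed = 0
        for key, value in source.items():
            digest = hashlib.sha256(value.encode()).hexdigest()
            if self.fingerprints.get(key) != digest:
                # Source row is new or changed: recompute just this row.
                self.target[key] = transform(value)
                self.fingerprints[key] = digest
                recomputed += 1
        for key in list(self.target):
            if key not in source:
                # Source row deleted: clear the stale derived data.
                del self.target[key]
                del self.fingerprints[key]
        return recomputed
```

Calling `sync` repeatedly behaves like a spreadsheet recalculation: unchanged rows cost nothing, and only the delta is reprocessed.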
Shortest walking tour to 81,998 bars in Korea – TSP solved in 178 days
15 by geeknews | 0 comments on Hacker News.
Clinical trial: novel nutritional formula treats gut microbial overgrowth
4 by wglb | 1 comments on Hacker News.
Dissecting a British wartime night vision tank periscope [video]
4 by michalpleban | 0 comments on Hacker News.
Ubuntu 25.10 Replaces GNU Coreutils with Rust Uutils
19 by donnachangstein | 6 comments on Hacker News.
Woman who tricked her way into men-only Magic Circle allowed in
15 by cmsefton | 0 comments on Hacker News.