DOGE Worker's Code Supports NLRB Whistleblower
123 by todsacerdoti | 23 comments on Hacker News.
MCP on AWS Lambda with MCPEngine
10 by simba-k | 0 comments on Hacker News.
Yagri: You are gonna read it
19 by escot | 2 comments on Hacker News.
FontDiffuser: Text to Font
7 by SubiculumCode | 2 comments on Hacker News.
Don't make it "like Google"
23 by nativeit | 21 comments on Hacker News.
Show HN: CocoIndex – Open-Source Data framework for AI, built for data freshness
3 by badmonster | 0 comments on Hacker News.
Hi HN, I’ve been working on CocoIndex, an open-source data ETL framework for transforming data for AI, optimized for data freshness. You can start a CocoIndex project with `pip install cocoindex` and declare a data flow that builds ETL like LEGO: a RAG pipeline with vector embeddings, knowledge graphs, or extract-and-transform steps with LLMs. It is a data processing framework that goes beyond text. When you run the data flow, in either live mode or batch mode, it processes the data incrementally with minimal recomputation, making it fast to update the target stores when the source changes.

Get started video: https://www.youtube.com/watch?v=gv5R8nOXsWU
Demo video: https://www.youtube.com/watch?v=ZnmyoHslBSc

Previously, I worked at Google for 8 years on projects like search indexing and ETL infra. After I left Google last year, I built various projects and went through pivoting hell. In every project, data still sat at the center of the problem, and I found myself building data infra rather than the business logic I needed for data transformation. The current prepackaged RAG-as-a-service offerings don't serve my needs: I need to choose a different strategy for the context, and I also need deduplication, clustering (items are related), and other custom features that are commonly needed. That’s where CocoIndex started.

The philosophy behind it is simple: data transformation is like formulas in a spreadsheet. The ground truth lives in the source data; every transformation step and the final target store are derived data, and should react to changes in the source. If you use CocoIndex, you only need to worry about defining transformations, like writing formulas.

*Data flow paradigm* - an immediate choice, because with no side effects, lineage and observability come out of the box.

*Incremental processing* - if you are a data expert, a good analogy is a materialized view beyond SQL. The framework relies on static sharding to track pipeline state and reprocesses only the necessary portions. When data changes, the framework handles change data capture comprehensively, combining push and pull mechanisms, then clears stale derived data/versions and re-indexes based on tracked data/logic changes or data TTL settings. There are lots of edge cases to get right, for example when a row that is referenced elsewhere changes; these should be handled at the framework level.

*At the compute engine level* - the framework has to account for multiple processes and concurrent updates, and resume from the state left by a terminated execution. In the end, we want a framework that is easy to build with at exceptional velocity, yet scalable and robust in production.

*Standardized interfaces throughout the data flow* - custom logic plugs in like LEGO, alongside a variety of native built-in components. For example, it takes only a few lines to switch among Qdrant, Postgres, and Neo4j. A sketch of what such a flow looks like follows below.

CocoIndex is licensed under Apache 2.0: https://ift.tt/snOo6vH
Getting started: https://ift.tt/Zu5TGjh

Excited to hear your thoughts, and thank you so much!

Linghua
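To make the LEGO-style flow and the spreadsheet analogy concrete, here is a minimal sketch of a text-embedding flow in the spirit of the project's quickstart. The specific names used here (`flow_def`, `add_source`, `SplitRecursively`, `SentenceTransformerEmbed`, the `Postgres` export) are reconstructed from memory and may differ between versions, so treat this as illustrative rather than canonical; the getting-started link above has the real API.

```python
# Illustrative sketch only: API names are recalled from CocoIndex's
# quickstart and may not match the current release.
import cocoindex

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder,
                        data_scope: cocoindex.DataScope):
    # Source of truth: local markdown files. Everything below is derived
    # data, recomputed incrementally when these files change.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    doc_embeddings = data_scope.add_collector()

    with data_scope["documents"].row() as doc:
        # Derived step 1: chunk each document.
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)

        with doc["chunks"].row() as chunk:
            # Derived step 2: embed each chunk.
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            doc_embeddings.collect(
                filename=doc["filename"],
                location=chunk["location"],
                text=chunk["text"],
                embedding=chunk["embedding"])

    # Target store: swapping Postgres for Qdrant or Neo4j would be a
    # change to this one export call.
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.storages.Postgres(),
        primary_key_fields=["filename", "location"])
```

Read through the spreadsheet lens: the source is the only input cell, each `transform` is a formula over it, and the exported table is a materialized result the engine keeps fresh as the source changes.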
Yaakov Kirschen’s other legacy
10 by Kirkman14 | 1 comment on Hacker News.
Efficient Code Search with Nvidia DGX
6 by simplesort | 0 comments on Hacker News.