#typescript #chatbot #cot #graphrag #knowledge_graph #mysql #rag #serverless #vector_database
TiDB.AI (the hosted demo of pingcap/autoflow) is a free, open-source conversational knowledge base built on Graph RAG. It constructs a knowledge graph on top of TiDB Serverless vector storage, LlamaIndex, and DSPy, so you can search your content by asking questions in natural language, much like talking to a person. The knowledge graph is editable, letting you correct inaccurate entries, and you can embed a search widget in your own website with just a few lines of code, giving visitors quick answers and a better experience. A live demo runs at https://tidb.ai.
https://github.com/pingcap/autoflow
#other #chatbot #hugging_face #llm #llm_local #llm_prompting #llm_security #llmops #machine_learning #open_ai #pathway #rag #real_time #retrieval_augmented_generation #vector_database #vector_index
Pathway's AI Pipelines are ready-to-run templates for quickly building and deploying high-accuracy RAG, AI-pipeline, and enterprise-search applications that always answer from the latest state of your data. You can test the apps on your own machine and deploy them with Docker on GCP, AWS, Azure, or on-premises. They connect to data sources such as file systems, SharePoint, Google Drive, S3, Kafka, PostgreSQL, and real-time data APIs, and include built-in, continuously updated data indexing for efficient search, so documents are extracted and organized in real time without setting up separate vector-database or ETL infrastructure. This cuts the work of building and maintaining AI applications.
https://github.com/pathwaycom/llm-app
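As a rough illustration of the "test locally, then query" workflow described above, here is how a request to a locally running template might look over HTTP. The port, endpoint path, and payload shape below are placeholders, not the library's confirmed API; check the README of the specific template you deploy.

```python
# Illustrative only: query a Pathway template running on your machine.
# The URL path and JSON payload are hypothetical placeholders; the real
# endpoint is defined by the template you run.
import requests

response = requests.post(
    "http://localhost:8000/v2/answer",  # placeholder endpoint
    json={"prompt": "Summarize the documents added in the last hour."},
    timeout=60,
)
print(response.json())
```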
#python #agent #ai #aiagent #application #chatbots #chatgpt #embeddings #llm #long_term_memory #memory #memory_management #rag #state_management #vector_database
Mem0 is a universal memory layer for AI agents and assistants. It stores user preferences and past conversations so interactions stay personal and efficient: you don't have to repeat information, and the model can answer based on what it already knows about you. Because only the relevant memories are sent to the model instead of the full conversation history, Mem0 reports cost savings of up to 80%. It is easy to set up and works with popular AI platforms such as OpenAI and Claude.
https://github.com/mem0ai/mem0
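A minimal sketch of the store-and-recall flow with Mem0's Python client, assuming `pip install mem0ai` and an OpenAI API key in the environment (the default configuration uses OpenAI for extraction and embeddings); exact method signatures may vary between releases.

```python
from mem0 import Memory

memory = Memory()

# Store a fact about a user, extracted from a conversation turn.
memory.add("I'm vegetarian and I prefer short answers.", user_id="alice")

# Later, retrieve only the memories relevant to the current request
# and pass them to the LLM instead of the whole chat history.
relevant = memory.search("What should I cook for Alice tonight?", user_id="alice")
print(relevant)  # relevant memories with relevance scores
```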
#python #agents #knowledge_graph #llm #llm_agent #rag #search #search_agent #vector_database
Airweave makes information from apps and databases accessible to AI agents. It connects more than 100 data sources with minimal coding, keeps the data synchronized, and exposes semantic search over it, so agents work from accurate, up-to-date knowledge. That makes it especially useful for tasks such as customer support or report generation.
https://github.com/airweave-ai/airweave
#python #ai #ai_agents #ai_memory #cognitive_architecture #cognitive_memory #contributions_welcome #good_first_issue #good_first_pr #graph_database #graph_rag #graphrag #help_wanted #knowledge #knowledge_graph #neo4j #open_source #openai #rag #vector_database
Cognee is an open-source AI memory engine that helps improve how AI systems understand and process data. It mimics human cognitive processes, creating "memories" from various data types like text and images. This enhances the accuracy of large language models (LLMs) and allows them to recall past interactions and documents. Cognee is scalable, cost-effective, and integrates easily with existing systems, making it a valuable tool for developers seeking to boost AI performance without relying on expensive APIs.
https://github.com/topoteretes/cognee
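A minimal sketch of cognee's add → cognify → search flow, assuming `pip install cognee` and an LLM API key (e.g. OPENAI_API_KEY) configured; the exact call signatures may differ between versions.

```python
import asyncio
import cognee

async def main():
    # Ingest raw data; cognify turns it into a knowledge graph of "memories".
    await cognee.add("Cognee builds memories from documents and past conversations.")
    await cognee.cognify()

    # Query the combined graph-and-vector memory.
    results = await cognee.search("What does cognee build?")
    for result in results:
        print(result)

asyncio.run(main())
```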
#java #anthropic #chatgpt #chroma #embeddings #gemini #gpt #huggingface #langchain #llama #milvus #ollama #onnx #openai #openai_api #pgvector #pinecone #vector_database #weaviate
LangChain4j makes it easy to add LLM-powered features to Java applications. It provides a unified API over many LLM providers and embedding stores, so you can switch between them without learning each one's specific SDK, which keeps experimentation fast and your code flexible. It also ships with many examples and tools for quickly building complex AI applications such as chatbots and retrieval (RAG) systems, letting you focus on the application rather than the integration plumbing.
https://github.com/langchain4j/langchain4j
#typescript #agents #ai #embedders #genkit #llm #machine_learning #multimodal #rag #vector_database
Genkit is an open-source framework by Google Firebase that helps you easily build AI-powered apps using a single interface to connect many AI models like Google Gemini, OpenAI, and Anthropic. It supports JavaScript/TypeScript (stable), Go (beta), and Python (alpha), letting you create chatbots, automations, and recommendations quickly with simple code. Genkit works well with web and mobile platforms, offers tools for testing and debugging AI features locally, and lets you deploy and monitor your AI apps on Firebase or other cloud services. This saves you time and effort in developing and managing AI applications efficiently.
https://github.com/firebase/genkit
#python #ai #context #embedded #faiss #knowledge_base #knowledge_graph #llm #machine_learning #memory #nlp #offline_first #opencv #rag #retrieval_augmented_generation #semantic_search #vector_database #video_processing
Memvid stores millions of text chunks as QR codes inside a single MP4 video file; the project claims the result is 50-100x smaller than a typical database. After setup, searches run locally in under 100 milliseconds, with no servers or internet connection required. It works offline, is driven by a few lines of simple Python, and supports PDF ingestion and chatting with your data. A planned version 2 adds continuous memory updates, shareable memory capsules, fast local caching, and better video compression. The result is a portable, efficient way to build and search large knowledge bases.
https://github.com/Olow304/memvid
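A rough sketch of the encode-then-search pattern described above, assuming quickstart-style encoder and retriever classes; the class and method names here are recalled from the project's examples and may differ between releases.

```python
from memvid import MemvidEncoder, MemvidRetriever  # assumed quickstart classes

# Encode text chunks as QR frames in a single MP4, plus a small index file.
encoder = MemvidEncoder()
encoder.add_chunks(["Invoices are due within 30 days.", "Support hours are 9-5 CET."])
encoder.build_video("memory.mp4", "memory_index.json")

# Later, fully offline: semantic search directly against the video memory.
retriever = MemvidRetriever("memory.mp4", "memory_index.json")
for chunk in retriever.search("When are invoices due?", top_k=2):
    print(chunk)
```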
#python #agent #context_engineering #electron #embedding_models #memory #proactive_ai #python3 #rag #react #vector_database #vision_language_model
MineContext is a proactive, context-aware AI assistant that helps you work more efficiently. It captures information from your computer screen and other sources, then turns it into insights, summaries, and reminders that keep you organized and focused on important tasks. Because all data is stored on your local device rather than in the cloud, it stays private. Think of it as a personal assistant for managing your digital life.
https://github.com/volcengine/MineContext
#python #ai #faiss #gpt_oss #langchain #llama_index #llm #localstorage #offline_first #ollama #privacy #rag #retrieval_augmented_generation #vector_database #vector_search #vectors
LEANN is a tiny but powerful vector database that turns your laptop into a personal AI assistant able to search millions of documents, using about 97% less storage than traditional systems with no loss of accuracy. Instead of persisting every embedding, it stores a compact graph and computes embeddings only when a query needs them, which saves enormous space and keeps your data private on your own device. You can search your files, emails, browser history, chat logs, live data from platforms like Slack and Twitter, and even codebases, all locally and without cloud costs: fast, private, efficient AI-powered search and retrieval on your own laptop.
https://github.com/yichuan-w/LEANN
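The storage saving comes from not persisting every embedding. Below is a conceptual sketch of that idea (not LEANN's actual API): keep only the raw chunks plus a small neighbor graph, and embed just the candidate chunks at query time. It assumes `pip install sentence-transformers numpy`; the chunk data and graph are toy examples.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = ["notes about 2023 taxes", "emails about the Lisbon trip", "log-parsing code"]
# Toy "graph": each chunk lists a few nearby chunks (here simply all the others).
neighbors = {i: [j for j in range(len(chunks)) if j != i] for i in range(len(chunks))}

def search(query: str, entry: int = 0, top_k: int = 2):
    q = model.encode(query)
    candidates = [entry, *neighbors[entry]]               # walk the graph from an entry point
    embs = model.encode([chunks[i] for i in candidates])  # embeddings computed on demand
    scores = embs @ q / (np.linalg.norm(embs, axis=1) * np.linalg.norm(q))
    best = sorted(zip(candidates, scores), key=lambda p: -p[1])[:top_k]
    return [(chunks[i], float(s)) for i, s in best]

print(search("planning a trip"))
```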