Projects with this topic
A Go-based RAG (Retrieval-Augmented Generation) server that exposes a Model Context Protocol (MCP) interface for LLM clients. It provides semantic search over user documents, agent memory management, and a knowledge graph, backed by PgVector or ChromaDB, enabling AI agents to store, retrieve, and interconnect knowledge across sessions.
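The semantic search side of a server like this reduces to nearest-neighbor lookup over embedding vectors. Below is a minimal pure-Python sketch of that core operation; the function and document names are illustrative, and the actual server is written in Go and delegates this work to PgVector or ChromaDB rather than computing similarities in memory.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """Return the k documents whose embeddings are most similar to the query.

    `docs` is a list of (doc_id, embedding) pairs; in a real deployment the
    embeddings would live in a pgvector or ChromaDB index, not a Python list.
    """
    scored = [(doc_id, cosine_similarity(query_vec, emb)) for doc_id, emb in docs]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy 3-dimensional embeddings; real embedding models emit hundreds of dims.
docs = [
    ("notes.md", [0.9, 0.1, 0.0]),
    ("api.md",   [0.1, 0.9, 0.1]),
    ("todo.md",  [0.0, 0.2, 0.9]),
]
results = top_k([1.0, 0.0, 0.0], docs, k=1)  # notes.md ranks first
```

PgVector performs the same ranking server-side via its distance operators, which keeps the vectors out of application memory and lets an index accelerate the search.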
Provide FRCC DSIR with the ability to securely, confidently, and efficiently answer questions informed by our team's codebases. Where confidence is low, answers should be flagged as such to inform the person interacting with the system.
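One common way to implement the low-confidence flagging described above is to threshold the retrieval similarity scores that accompany an answer. A sketch under that assumption; the helper name and the cutoff value are hypothetical, not taken from the project:

```python
LOW_CONFIDENCE_THRESHOLD = 0.6  # hypothetical cutoff; tuned per corpus in practice

def answer_with_confidence(answer, retrieval_scores,
                           threshold=LOW_CONFIDENCE_THRESHOLD):
    """Attach a low-confidence flag when the best retrieval score is weak.

    `retrieval_scores` are similarity scores (higher is better) for the
    chunks the answer was grounded on; an empty list counts as no support.
    """
    best = max(retrieval_scores, default=0.0)
    return {
        "answer": answer,
        "confidence": best,
        "low_confidence": best < threshold,
    }
```

The flag then travels with the answer, so the interface can surface "low confidence" to the person asking rather than presenting every response with equal authority.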
This project demonstrates a Retrieval-Augmented Generation (RAG) system designed to answer questions based on a collection of enterprise documents (PDFs and TXT files). It leverages LangChain for orchestrating the query pipeline and LlamaIndex for indexing and retrieving relevant information. The system uses Google's Gemini 1.5 Flash for generating answers and Gemini Embeddings for semantic search. A Streamlit interface provides a user-friendly way to interact with the knowledge assistant.
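The retrieve-then-generate flow this project describes can be sketched at the prompt-assembly step: retrieved chunks are joined into a context block that constrains the model's answer. The template and function names below are illustrative only; the real project delegates this orchestration to LangChain and LlamaIndex and sends the prompt to Gemini 1.5 Flash.

```python
PROMPT_TEMPLATE = """Answer the question using only the context below.

Context:
{context}

Question: {question}
Answer:"""

def build_rag_prompt(question, retrieved_chunks):
    """Fill the template with retrieved (source, text) chunks.

    Tagging each chunk with its source lets the model cite where an
    answer came from, which helps users verify it against the documents.
    """
    context = "\n\n".join(f"[{src}] {text}" for src, text in retrieved_chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)

chunks = [
    ("policy.pdf", "Refunds are issued within 30 days."),
    ("faq.txt", "Contact support for refund status."),
]
prompt = build_rag_prompt("How long do refunds take?", chunks)
```

The resulting string is what the generation model actually sees: grounding the answer in retrieved text is what distinguishes a RAG system from asking the model directly.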