A customer support chatbot that uses SQLite for its knowledge base, text embeddings for semantic search, and Groq's Llama 3-70B for response generation.

ProjectProRepo/How-to-Build-RAG-Pipeline-from-scratch


How to Build a RAG Pipeline from Scratch

This repository provides a step-by-step guide to building a Retrieval-Augmented Generation (RAG) pipeline from scratch. The pipeline enables efficient document retrieval and AI-powered response generation using FastEmbed, SQLite, and Groq AI.

Features

  • Document Ingestion: Process and store text-based FAQs in a structured format.
  • Vector Embeddings: Convert text into numerical embeddings for similarity search.
  • Database Storage: Store and retrieve embeddings using SQLite.
  • Semantic Search: Find the most relevant FAQ entries based on user queries.
  • AI-Powered Answers: Generate customer support responses using Groq AI.
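The ingestion, embedding, and storage steps above can be sketched as follows. This is a minimal illustration using only the standard library: the `embed` function is a deterministic toy stand-in for FastEmbed's `TextEmbedding` model, and the `faqs` table schema and sample entries are illustrative assumptions, not the repository's actual schema.

```python
import json
import sqlite3

# Toy stand-in for FastEmbed: a real pipeline would embed with
# TextEmbedding().embed(texts). This hash-like bag-of-characters
# vector exists only so the sketch runs without dependencies.
def embed(text: str, dim: int = 8) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch) / 1000.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]  # unit-normalised for cosine search

# Store each FAQ alongside its embedding, serialised as a JSON array.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE faqs (id INTEGER PRIMARY KEY, "
    "question TEXT, answer TEXT, embedding TEXT)"
)
faqs = [
    ("How do I reset my password?",
     "Use the 'Forgot password' link on the login page."),
    ("What payment methods are accepted?",
     "We accept credit cards and PayPal."),
]
for question, answer in faqs:
    conn.execute(
        "INSERT INTO faqs (question, answer, embedding) VALUES (?, ?, ?)",
        (question, answer, json.dumps(embed(question))),
    )
conn.commit()
```

Serialising the vector as JSON keeps the sketch dependency-free; a production pipeline would more likely store the raw float bytes or use an SQLite vector extension.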

Tech Stack

  • Python 3.9+
  • FastEmbed (for embedding generation)
  • SQLite (for database storage)
  • Groq AI (for response generation)

For a detailed explanation of the code in the repository, read the full article on How to Build Generative AI Applications.
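The retrieval and generation halves of the pipeline can be sketched in the same spirit. The toy `embed` function again stands in for FastEmbed, the `faqs` table mirrors the assumed schema from the ingestion sketch, and the Groq call is shown only as a commented fragment (the model name `llama3-70b-8192` and the `groq` client usage are assumptions requiring the `groq` package and a `GROQ_API_KEY`).

```python
import json
import math
import sqlite3

# Toy stand-in for FastEmbed, as in the ingestion sketch.
def embed(text: str, dim: int = 8) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch) / 1000.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # vectors are unit-normalised

def top_k(conn: sqlite3.Connection, query: str, k: int = 1):
    """Score every stored FAQ against the query and return the best k."""
    qv = embed(query)
    rows = conn.execute("SELECT question, answer, embedding FROM faqs")
    scored = [(cosine(qv, json.loads(e)), q, a) for q, a, e in rows]
    return sorted(scored, reverse=True)[:k]

# Build a small demo database inline so this sketch runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faqs (question TEXT, answer TEXT, embedding TEXT)")
for q, a in [
    ("How do I reset my password?", "Use the 'Forgot password' link."),
    ("What payment methods are accepted?", "Credit cards and PayPal."),
]:
    conn.execute("INSERT INTO faqs VALUES (?, ?, ?)",
                 (q, a, json.dumps(embed(q))))

hits = top_k(conn, "How do I reset my password?")
context = "\n".join(f"Q: {q}\nA: {a}" for _, q, a in hits)

# Handing the retrieved context to Groq for answer generation
# (assumed usage; needs the `groq` package and GROQ_API_KEY):
#   from groq import Groq
#   reply = Groq().chat.completions.create(
#       model="llama3-70b-8192",
#       messages=[
#           {"role": "system",
#            "content": f"Answer using only this context:\n{context}"},
#           {"role": "user", "content": "I forgot my password"},
#       ],
#   )
```

Scanning every row works for a small FAQ table; at larger scale the linear scan would be replaced by an approximate-nearest-neighbour index or an SQLite vector extension.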
