From the course: Advanced RAG Applications with Vector Databases


Demo: Adding the LLM

- [Instructor] The final part of creating a RAG application on top of our vector store is to add the LLM. For this part, you'll need access to an LLM. You can do this in the form of an API key from OctoAI, OpenAI, or some other LLM provider. Alternatively, you can run your own LLM locally. This course assumes that you are using an OpenAI API key. We kick off our LLM access by importing our environment variables and loading them using Python-dotenv's load_dotenv method. Then, we import OpenAI from langchain_openai and initialize it as our LLM. Next, we create a prompt template for our chat. The main thing to pay attention to in the prompt creation is that we use it to pass the question and the context via brackets, just like we would with an f-string in Python. Once we create a prompt string, we can use the ChatPromptTemplate object from langchain to create a prompt template. We need two more imports to create our chain. The RunnablePassthrough object takes a string and lets us…
