
Tutorials for RAG usage with an LLM locally or in Google Colab

Simple RAG tutorials that can be run locally with an LLM or in Google Colab (Pro version only).

These notebooks can be executed locally or in Google Colab. Either way, you need to install Ollama to run them.

RAG diagram

Tutorials

Technologies used

For these tutorials, we use LangChain, LlamaIndex, and HuggingFace to build the RAG application code, Ollama to serve the LLM, and a Jupyter or Google Colab notebook to run the examples.

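As a rough illustration of how these pieces fit together (a minimal sketch, not one of the repository's notebooks; it assumes langchain, langchain-community, and docarray are installed and that Ollama is already serving llama3 locally), a LangChain RAG chain could look like this:

from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# LLM and embeddings are both provided by the local Ollama instance.
llm = Ollama(model="llama3")
embeddings = OllamaEmbeddings(model="llama3")

# Index a couple of toy documents in an in-memory vector store.
vectorstore = DocArrayInMemorySearch.from_texts(
    ["Ollama serves LLMs on your own machine.",
     "RAG retrieves relevant context before the LLM generates an answer."],
    embedding=embeddings,
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)

# Retrieval -> prompt -> LLM -> plain-text answer.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("What does RAG do before generating an answer?"))

The in-memory vector store keeps the sketch self-contained; the notebooks follow the same retrieve-then-generate pattern with their own documents and components.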

Instructions to run the example locally

  • Download and install Ollama from https://ollama.com/download

  • Pull the LLM model. In this case, llama3:
ollama pull llama3

More details about llama3 can be found in the official release blog and in the Ollama documentation. A quick check that the pulled model responds from Python is sketched below.
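A minimal sketch of such a check, assuming the ollama Python client is installed (pip install ollama) and the Ollama application/server is running locally:

import ollama

# Ask the locally served llama3 model a simple question to confirm the setup works.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "In one sentence, what is retrieval-augmented generation?"}],
)
print(response["message"]["content"])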

Instructions to run the example using Google Colab (Pro account needed)

  • Install Ollama from the command line:

(Press the button on the bottom-left part of the notebook to open a Terminal)

curl -fsSL https://ollama.com/install.sh | sh
  • Pull the LLM model. In this case, llama3:
ollama serve & ollama pull llama3
  • Serve the model so the notebook code can access it (a sketch of connecting to it from Python is shown below this list):
ollama serve & ollama run llama3
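With the model being served, the notebook code can talk to Ollama's default local endpoint. A minimal sketch, assuming langchain-community is installed (the base_url shown is Ollama's standard address and can be omitted if unchanged):

from langchain_community.llms import Ollama

# Connect to the Ollama server started in the terminal above.
llm = Ollama(model="llama3", base_url="http://localhost:11434")
print(llm.invoke("Say hello in five words."))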

If an error related to docarray is raised, refer to this solution: https://stackoverflow.com/questions/76880224/error-using-using-docarrayinmemorysearch-in-langchain-could-not-import-docarray