RAG from the Ground Up with Python and Ollama

16,076 views

Decoder

1 day ago

Retrieval Augmented Generation (RAG) is the de facto technique for giving LLMs the ability to interact with any document or dataset, regardless of its size. Follow along as I cover how to parse and manipulate documents, explore how embeddings are used to describe abstract concepts, implement a simple yet powerful way to surface the most relevant parts of a document for a given query, and ultimately build a script that you can use to have a locally-hosted LLM engage with your own documents.
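
For reference, a minimal sketch of that pipeline with the ollama-python library (assuming a local Ollama install with the mistral and nomic-embed-text models pulled; the filename and the top-5 chunk count are placeholder choices, not the video's exact values):

import ollama
from math import sqrt

def parse_file(path):
    # Split the source document into paragraph-sized chunks.
    with open(path, encoding="utf-8") as f:
        return [p.strip() for p in f.read().split("\n\n") if p.strip()]

def cosine_similarity(a, b):
    # How closely two embedding vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

chunks = parse_file("peter_pan.txt")  # hypothetical filename
embeddings = [ollama.embeddings(model="nomic-embed-text", prompt=c)["embedding"]
              for c in chunks]

prompt = "Who is the story's primary villain?"
prompt_embedding = ollama.embeddings(model="nomic-embed-text", prompt=prompt)["embedding"]

# Surface the most relevant chunks and hand them to the LLM as context.
ranked = sorted(zip(chunks, embeddings),
                key=lambda pair: cosine_similarity(prompt_embedding, pair[1]),
                reverse=True)
context = "\n".join(chunk for chunk, _ in ranked[:5])
response = ollama.chat(model="mistral", messages=[
    {"role": "system", "content": "Answer using only this context:\n" + context},
    {"role": "user", "content": prompt},
])
print(response["message"]["content"])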
Check out my other Ollama videos: • Get Started with Ollama
Links:
Code from video - decoder.sh/videos/rag-from-th...
Ollama Python library - github.com/ollama/ollama-python
Project Gutenberg - www.gutenberg.org
Nomic Embedding model (on ollama) - ollama.com/library/nomic-embe...
BGE Embedding model - huggingface.co/CompendiumLabs...
How to use a model from HF with Ollama - • Importing Open Source ...
Cosine Similarity - blog.gopenai.com/rag-for-ever...
Timestamps:
00:00 - Intro
00:26 - Environment Setup
00:49 - Function review
01:50 - Source Document
02:18 - Starting the project
02:37 - parse_file()
04:35 - Understanding embeddings
05:40 - Implementing embeddings
07:01 - Timing embedding
07:35 - Caching embeddings
10:06 - Prompt embedding
10:19 - Cosine similarity for embedding comparison
12:16 - Brainstorming improvements
13:15 - Giving context to our LLM
14:29 - CLI input
14:49 - Next steps

COMMENTS: 111
@decoder-sh
@decoder-sh 1 month ago
Thanks to @munchcup for sharing a great embedding model that is available straight from the ollama library: ollama.com/library/nomic-embed-text 🔥 Also, all of the code from this video is provided on my website: decoder.sh/videos/rag-from-the-ground-up-with-python-and-ollama 👌
@munchcup
@munchcup 1 month ago
🙏 I'm humbled
@parttimelarry
@parttimelarry 1 month ago
This looks solid. Happy it's not all LangChain-specific like many tutorials out there. Saving for later.
@decoder-sh
@decoder-sh 1 month ago
I'll cover LangChain soon enough, but I wanted to start with a from-scratch implementation to teach the basics.
@myhificloud
@myhificloud 1 month ago
@parttimelarry Your content was some of the first I absorbed as I entered the space. Spectacular and inspiring content. Looking forward to more if/when available, great work.
@decoder-sh
@decoder-sh 1 month ago
@parttimelarry Also I just subscribed to your account, congrats on 100k! I'd like to eventually explore the intersection of finance and LLMs :)
@dbwstein
@dbwstein 1 month ago
This is a great video! I'm also looking forward to your langchain video. I've been struggling with that.
@mitchell2769
@mitchell2769 1 month ago
As a nonprofessional programmer, this was the best introductory video on RAG I have seen anywhere, and that's saying a lot with how many I've watched. Thank you! I look forward to many more videos continuing the series!
@decoder-sh
@decoder-sh 1 month ago
Thank you for sharing, I'm happy to hear it!
@SergesLemo
@SergesLemo 17 hours ago
I second that.
@user-wr4yl7tx3w
@user-wr4yl7tx3w 18 days ago
I really think UKposts should be recommending this channel. The content quality is very high.
@decoder-sh
@decoder-sh 17 days ago
Thanks for saying so, welcome to my channel!
@edwardtaft8813
@edwardtaft8813 1 month ago
Really great! Thank you. Look forward to the next steps here with langchain!
@zekodun
@zekodun 1 month ago
Wow, the first version at 12:30 actually got the answer right. The croc was symbolic of ageing, and thus the true villain to both the youthful Pan and the elder Capt. Hook.
@decoder-sh
@decoder-sh 1 month ago
It sounds like I should give my system more credit 😅
@Gi-Home
@Gi-Home 25 days ago
Excellent tutorial, thank you so much, your code example ran perfectly and the results were quite decent.
@decoder-sh
@decoder-sh 25 days ago
I'm glad to hear it! Which embedding model did you end up using?
@yaa3g
@yaa3g 1 month ago
Well prepared and executed, super useful, thanks for your work.
@decoder-sh
@decoder-sh 1 month ago
Thank you very much! Looking forward to making more
@rakibuzzamanrahat
@rakibuzzamanrahat 1 month ago
Great videos, I just started in this space and started following your videos.
@decoder-sh
@decoder-sh 1 month ago
Welcome! I peeked at your Hands on with Machine Learning video - that's a great textbook :)
@proterotype
@proterotype 19 days ago
Great stuff per usual. Looking forward to the LangChain video
@tinkerman1790
@tinkerman1790 1 month ago
Thanks for sharing this amazing tutorial showing us what RAG is, as well as the "how-to", in such detail. Keep up the great work; I clicked the "subscribe" button right away 👍🏻
@decoder-sh
@decoder-sh 1 month ago
Thank you for subscribing, I look forward to making more videos for you!
@txbluesguy
@txbluesguy 1 month ago
This is fantastic. It fits perfectly with a project I am working on.
@decoder-sh
@decoder-sh 1 month ago
That’s great! May I ask what you’re building?
@kushspatel
@kushspatel 18 days ago
I love these videos, very helpful. I am a junior dev trying to understand some of these concepts and I feel like these videos have helped me immensely!
@decoder-sh
@decoder-sh 17 days ago
I'm glad to hear it, keep learning!
@cj_is_here
@cj_is_here 1 month ago
Awesome video. Very well explained. Congratulations
@decoder-sh
@decoder-sh 1 month ago
Thank you CJ!
@BP-kc3dj
@BP-kc3dj 1 month ago
FANTASTIC PRESENTATION! Thank you for being a good teacher. I stumbled on your channel after seeing the opposite of what you did. I mean, it was really bad. Thank you!
@decoder-sh
@decoder-sh 1 month ago
Haha, well I'm sorry you had a bad experience before, but I'm glad you found your way here!
@SimplyCarolLee
@SimplyCarolLee 19 days ago
I came here upon the recommendation of my good friend and this video is so educational. Loved it! ❤
@decoder-sh
@decoder-sh 19 days ago
Your friend clearly has good taste, thanks for watching!
@CV-wo9hj
@CV-wo9hj 4 days ago
Would love to see you do a version of this using nomic, llama3, and ChromaDB for your vector store
@CV-wo9hj
@CV-wo9hj 4 days ago
Keep it up bud, your videos are great 👍
@ronaldokun
@ronaldokun 1 month ago
I liked your overall presentation style. Objective and minimalist without being simplistic.
@decoder-sh
@decoder-sh 1 month ago
Thank you for watching!
@gustavow5746
@gustavow5746 1 month ago
Best video about RAG on the web so far! Congrats! Your code runs! I think it needed an intro about Ollama and how to run it, but besides that, this video is fantastic!
@decoder-sh
@decoder-sh 1 month ago
Not a bad idea! I do have a whole playlist on Ollama, so you can choose where you want to jump in :) ukposts.info/slow/PL4041kTesIWby5zznE5UySIsGPrGuEqdB
@gustavow5746
@gustavow5746 1 month ago
@@decoder-sh I wanted to suggest a video on how to train a model on specific documents. We can see that this approach has some limitations, such as the number of sentences passed as context to the model; depending on that number, the output is different. For example, if you choose the last five sentences vs the last twenty, the output is a little different. Anyway, I have to say that this is the first video that combines it all (embeddings, vectors, LLMs, documents, and pure code) together. Congrats again! Looking forward to your next content.
@decoder-sh
@decoder-sh 1 month ago
@@gustavow5746 Great idea! Model fine-tuning is definitely on my list :)
@gustavow5746
@gustavow5746 19 days ago
@@decoder-sh Hello, I ran some tests and it looks like these models are already trained on the Peter Pan story. I tried asking with and without using the script, and the answers were very similar. Just wanted to give this feedback. Thanks
@decoder-sh
@decoder-sh 18 days ago
@@gustavow5746 Ah that's a great test to run, thank you for the information! I will take that into consideration for future videos.
@maheshsanjaychivateres982
@maheshsanjaychivateres982 1 month ago
Thank you for this video lesson
@decoder-sh
@decoder-sh 1 month ago
You are welcome!
@DC-xt1ry
@DC-xt1ry 1 month ago
Very, very nice! Thanks for sharing!
@decoder-sh
@decoder-sh 1 month ago
You're on a roll, thanks for watching!
@mrrohitjadhav470
@mrrohitjadhav470 1 month ago
Awesome lesson, precisely what I had been looking for. Please look at fine-tuning this existing model with many documents. I looked everywhere and couldn't find a way to do it without utilising an API.
@decoder-sh
@decoder-sh 1 month ago
Thanks for watching! I plan to cover fine tuning soon, there are a lot of interesting techniques to address
@mrrohitjadhav470
@mrrohitjadhav470 1 month ago
@@decoder-sh Great ❤
@munchcup
@munchcup 1 month ago
In the embedding part, there is a very fast model specific to embedding in ollama, named nomic-embed-text, which simplifies the process. Just a point to note.
@decoder-sh
@decoder-sh 1 month ago
Awesome suggestion, thank you! Wow, and they're MRL embeddings? I've been meaning to do a video on these ollama.com/library/nomic-embed-text
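
For anyone following along, swapping in this model is a one-line change. A hedged sketch with the ollama-python library, assuming `ollama pull nomic-embed-text` has been run locally:

import ollama

# nomic-embed-text produces 768-dimensional vectors.
embedding = ollama.embeddings(
    model="nomic-embed-text",
    prompt="The crocodile that swallowed a clock",
)["embedding"]
print(len(embedding))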
@munchcup
@munchcup 1 month ago
@@decoder-sh Thank you for your teachings. You've opened me up to endless possibilities in using ollama as a self-taught dev.
@mbottambotta
@mbottambotta 1 month ago
Thanks for your clear and effective explanation. I'm grateful also for how you steered clear of frameworks that abstract the RAG implementation details away from you. In fact, I'd appreciate it if you could dive deeper into things like chunking strategies and agents.
@decoder-sh
@decoder-sh 1 month ago
Thanks for watching! I do plan on covering these topics in future videos. I'm currently writing a series of videos on more advanced RAG topics using either llamaindex or langchain. I might do one that looks at "manual" pdf parsing versus builtin document loaders if people are interested
@mbottambotta
@mbottambotta 1 month ago
@@decoder-sh Thanks! Much appreciated, looking forward to your future videos.
@laritaharrington1117
@laritaharrington1117 24 days ago
@@decoder-sh I am very new to AI and coding. How would you suggest I truly understand what these tools are doing? I have a great business plan, but my mind thinks in workflows, not like a programmer. Great content!
@mbottambotta
@mbottambotta 5 days ago
@@laritaharrington1117 Here's what I do: first, I code along with the video. Typically, I make mistakes and have to fix them; this already helps me understand better. Then, I come up with my own little project and try and implement that. This usually takes far longer than I had originally planned for, but it does mean that I learn about the limitations and pitfalls. If you like, we can do a little project together; learning this way is probably even more effective.
@BigSenior472
@BigSenior472 1 month ago
Thanks!
@mistercakes
@mistercakes 1 month ago
Little confused about how there was similarity with "who is the story's primary villain". How can we know it's RAG that is providing this answer and not the inference model? It would also be nice to see the similar chunks, converted back to strings, to understand what the model got as input before it responded with the answer about "Hook". I think that's the only major thing missing from your tutorial. Thanks again for the good content.
@decoder-sh
@decoder-sh 1 month ago
Thank you for this feedback! This is a great point, I should've kept logging the chunks that are being passed to context, I'll keep that in mind for the next video. I encourage you to try this on your own system at home, but Mistral is good enough at instruction following that it adhered to our system prompt and only used the context it was provided. I'll try to explicitly show a failure case to demonstrate that our model is behaving as expected in future videos. Thanks for watching :)
@Djeez2
@Djeez2 1 month ago
You could test that by replacing "Hook" in the returned chunks with some other name and then seeing what the LLM returns as an answer.
@decoder-sh
@decoder-sh 1 month ago
@@Djeez2 Yep, this is a great idea. You could even set up unit tests for different models that either include or don't include the answer to a given question. The only challenge there is getting the model to say exactly "I can't answer that with the given context" or something that plays nice with simple tests.
@Aberger789
@Aberger789 8 days ago
Fantastic video! I've been trying to build a Streamlit chat-with-your-documents app that also keeps the historical context of the chat. But I'm beginning to wonder if that's not as useful as just sequencing it out, considering each turn will require context retrieval. All good
@user-lq1md3dw9z
@user-lq1md3dw9z 1 month ago
This is the best, easiest-to-understand intro to RAG. Do I need to install ollama on my Windows machine before running your code? And replace 'nomic-embed-text' with the model when running your code? Could you please do a video on Llama 2? I downloaded Llama 2 from Meta, but don't know how to use it, such as how to open it or query it. Your code-driven approach explains everything best.
@decoder-sh
@decoder-sh 1 month ago
Thanks for watching! You will need to install Ollama. Furthermore, you can actually use the llama-2 model directly from ollama! ollama.com/library/llama2
@skyblaze6687
@skyblaze6687 3 days ago
Thanks a million man, I really hate the absurd way those xxxx LlamaIndex people intentionally force their own storage setup
@AgustinVinao
@AgustinVinao 1 month ago
Great video. Looking forward to seeing a 2nd part with LlamaIndex, and what do you recommend for working with PDF files?
@decoder-sh
@decoder-sh 1 month ago
Definitely! I could do a whole video just on working with PDFs, it can be a pain 😰
@emil8367
@emil8367 1 month ago
Many thanks! What you share with us is awesome. Would be great to see more on how to use bge-base 🙂 Or not, actually: I watched your YT video on converting a gguf file to an ollama model and swapped it in: prompt_embedding = ollama.embeddings(model="bge-base-en-16", prompt=prompt)["embedding"] - it seems to be working, but I need to test it more 🙂 In that case, maybe you could create a tutorial on how to build a speech-to-speech multimodal assistant from scratch, running on e.g. Ubuntu?
@juliandarley
@juliandarley 10 days ago
This is very well done, thank you. When is your next video, and will it be on refining and improving RAG?
@decoder-sh
@decoder-sh 9 days ago
My next videos will be discussing the new models from meta and microsoft, and I'm working on another one that introduces langchain :)
@robots_id9112
@robots_id9112 1 month ago
Great videos! I want to ask: how do we set it up so it can answer multiple follow-up questions while retaining the same context?
@decoder-sh
@decoder-sh 1 month ago
The chat method takes a list of messages, so you can just add your previous interactions to that list to give your LLM context. Here's an example ukposts.info/have/v-deo/kniLf4aksaJzto0.html
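
A minimal sketch of that pattern with the ollama-python chat API: keep appending prior turns to the messages list so the model retains the conversation (the model name and questions are placeholders).

import ollama

messages = [{"role": "user", "content": "Who wrote Peter Pan?"}]
# The returned message dict ({'role': 'assistant', 'content': ...}) can be
# appended to the history as-is.
reply = ollama.chat(model="mistral", messages=messages)["message"]
messages.append(reply)
messages.append({"role": "user", "content": "What year was it published?"})
print(ollama.chat(model="mistral", messages=messages)["message"]["content"])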
@doctorbill37
@doctorbill37 1 month ago
Excellent video, explaining from the ground up how RAG works without immediately starting with LangChain. I have played with some other RAG implementations using Ollama and have intentionally asked questions outside the purview of the RAG content, and they do come back with the correct "not found in the documentation" response. But what still remains unclear to me is how an LLM is kept restricted to only accessing RAG content. How is that fence guaranteed? Does setting the temperature affect that fence?
@decoder-sh
@decoder-sh 1 month ago
It's difficult to make guarantees with LLMs, but your best bet is to use a model whose training data includes instruction following. Mistral is good at this by default, but also has an instruct fine-tuned version available on ollama!
@khalidkifayat
@khalidkifayat 23 days ago
Nice one. Question: how would you deliver this to a client as a remote project? And what cost-optimization factors should be considered?
@MidSeasonGroup
@MidSeasonGroup 1 month ago
Great video. Do you recommend RAG as an ideal way to update LLMs with framework library changes and updates, or is fine-tuning the way to go?
@decoder-sh
@decoder-sh 1 month ago
Yes I think RAG is definitely the best way to empower your LLM to give up-to-date answers about dynamic data like documentation. You'll just need to update your embeddings when the data changes.
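
One hedged way to do that invalidation (a sketch, not the video's exact caching scheme): key the cached embeddings by a hash of the source file, so any edit to the document triggers re-embedding.

import hashlib
import json
import os

import ollama

def get_embeddings(path, model="nomic-embed-text"):
    # The cache key changes whenever the file's contents change.
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    cache_path = os.path.join("embeddings", f"{digest}.json")
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    with open(path, encoding="utf-8") as f:
        chunks = [p.strip() for p in f.read().split("\n\n") if p.strip()]
    embeddings = [ollama.embeddings(model=model, prompt=c)["embedding"]
                  for c in chunks]
    os.makedirs("embeddings", exist_ok=True)
    with open(cache_path, "w") as f:
        json.dump(embeddings, f)
    return embeddings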
@J4M13M1LL3R
@J4M13M1LL3R 1 month ago
LETS GO
@decoder-sh
@decoder-sh 1 month ago
💻🦆
@myhificloud
@myhificloud 1 month ago
Curious, do you github? Thanks for another great video and useful content, very much appreciated.
@decoder-sh
@decoder-sh 1 month ago
Yes I have githubbed once or twice, why do you ask? I appreciate you watching, it means a lot!
@myhificloud
@myhificloud 1 month ago
@@decoder-sh Thank you for your reply. I am not a developer, yet I'm learning/absorbing. GitHub key points (for my workflow):
- presents a reliable single source of truth
- centralized repository for rapidly growing/scaling projects
- serves as a hub for linked resources (e.g. websites, youtube, code, updates, changes, new projects, etc)
- project versioning
- lowers the barrier to entry, while simplifying versioned resource aggregation for everyone, at every level
@jernr
@jernr 10 days ago
Does anyone know how to increase the size of the paragraphs so that the responses are more useful? That part was skipped over in the video, if I recall correctly.
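
One simple approach (an assumption, not necessarily the video's method) is to group several paragraphs into each chunk before embedding, trading retrieval precision for more context per match; the group size of 5 is an arbitrary choice:

def chunk_paragraphs(paragraphs, group_size=5):
    # Join every `group_size` consecutive paragraphs into one larger chunk.
    return ["\n".join(paragraphs[i:i + group_size])
            for i in range(0, len(paragraphs), group_size)]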
@user-wr4yl7tx3w
@user-wr4yl7tx3w 18 days ago
Do you think it's possible to use this (ollama + RAG) in conjunction with CrewAI?
@decoder-sh
@decoder-sh 18 days ago
Definitely! I know that CrewAI has rag tools and can use ollama for inference. I plan on covering CrewAI in a future video :)
@zeeshanfakhar1933
@zeeshanfakhar1933 22 days ago
Hi, what are your computer specs for running ollama?
@decoder-sh
@decoder-sh 21 days ago
Hi, I’m using an M1 MacBook Pro, but ollama itself has minimal requirements (you don’t even need a gpu!). It really comes down to the model you’re trying to run.
@aroonsubway2079
@aroonsubway2079 14 days ago
Thanks for this great video. Just a newbie question from me: how do I formulate a RAG setup to make Ollama understand a specific text format? I provided mixtral a list of timed activities for a person like this:
Day1 7-00-00 : breakfast
Day1 7-00-08 : workout
Day1 7-00-16 : reading
......
Day20 7-00-00 : breakfast
Day20 7-00-08 : reading
Day20 7-00-16 : workout
I asked the LLM to print out all instances of activities at 7-00-00 across all 20 days, which is super easy for a human, but the results from the LLM were always wrong... Can you give me some instructions? Is RAG suitable for processing such text data?
@decoder-sh
@decoder-sh 13 days ago
With data that is as cleanly structured as this, why couldn't you just write some simple code to filter the results for you? Like `data.split('\n').map(line => line.split(' ')).filter(([day, time, colon, activity]) => time === '7-00-00')`? Or you could put your data into a database, give the schema to an LLM, and have it write queries for you - this is called self-querying python.langchain.com/docs/modules/data_connection/retrievers/self_query/
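
The same filter in Python (the language used in the video), assuming one "DayN time : activity" entry per line:

data = """Day1 7-00-00 : breakfast
Day1 7-00-08 : workout
Day20 7-00-00 : breakfast
Day20 7-00-08 : reading"""

for line in data.splitlines():
    day, time, _colon, activity = line.split()
    if time == "7-00-00":
        print(day, activity)  # prints: Day1 breakfast, then Day20 breakfast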
@mbarsot
@mbarsot 1 month ago
Hello. Two separate questions. a) Your code works (thanks) and I'm using it for a task at my workplace (basically helping extract info from a repository of dozens of .docx files). However, it seems that the model only knows about the input file (the embeddings) and "forgot" everything else. That is, I can query the txt and get mostly correct answers, but if my answer needs to be complemented by additional info (which I know is available if I query mistral on ollama normally), it doesn't have a clue. Or am I doing something wrong? The "augmented" in RAG seems to indicate that we add (maybe with higher priority) the knowledge in the input docs to the existing knowledge, but it seems not to be the case...
@decoder-sh
@decoder-sh 1 month ago
Hi there, I'm glad to hear the code works! It sounds like you need to change the system prompt, since we're explicitly telling it to only use the provided context and no knowledge that it already has. Let me know if that works!
@mbarsot
@mbarsot 29 days ago
@@decoder-sh Thank you for your answer. I changed the system prompt as follows, but no change...:
SYSTEM_PROMPT = """You are a helpful reading assistant who answers questions based on snippets of text provided in context and also with general knowledge. Be as concise as possible. If you're unsure, just say that you don't know. Reply in the same language used by the user question.
Context:
"""
(PS: I added the language instruction because it is digesting documents in French but I want answers in EN.)
@mohamedkeddache4202
@mohamedkeddache4202 1 month ago
Please, can someone tell me how to activate streaming on the response?
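
A minimal sketch with the ollama-python library, which supports a stream=True flag that yields response chunks as they arrive (model and prompt here are placeholders):

import ollama

stream = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Summarize Peter Pan in one line."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries the next piece of the assistant's message.
    print(chunk["message"]["content"], end="", flush=True)
print()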
@grumpyguy7656
@grumpyguy7656 3 days ago
Wow, my PC took 269.115 seconds to embed 85 sentences with tinydolphin. I think I need to use the API key method.
@squiddymute
@squiddymute 1 month ago
Can you do embeddings with images, using the llava model?
@decoder-sh
@decoder-sh 1 month ago
Very interesting question! It's definitely possible to create image embeddings, and in fact that's how image search works on Google. However, I don't think you can create image embeddings directly from Ollama. It looks like Llava embeds images using the CLIP model internally.
github.com/haotian-liu/LLaVA/blob/main/llava/model/multimodal_encoder/clip_encoder.py
huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip
@decoder-sh
@decoder-sh 1 month ago
Interesting conversation on the topic x.com/willdepue/status/1772050291757850826?s=46
@PenicheJose1
@PenicheJose1 15 days ago
Do I have to install Ollama in every new python environment I create?
@decoder-sh
@decoder-sh 15 days ago
Yes, you will need to install the Ollama Python library in every new Python environment you create; no, you will not need to install the Ollama application (which serves the models and API that the Python library talks to) for each new environment.
@PenicheJose1
@PenicheJose1 15 days ago
@@decoder-sh I see, thank you!
@Aristocle
@Aristocle 23 days ago
Could you give a similar example, but using a graph database like Neo4j?
@decoder-sh
@decoder-sh 21 days ago
Yes I would love to do something with graph databases! Do you have a specific use case in mind?
@Aristocle
@Aristocle 21 days ago
@@decoder-sh Try texts with examples of Aristotelian syllogisms (from which propositional logic originated), to see if it holds the thread of reasoning. 😁 They're short, but hard to follow. Or logical documents in general, where you can see if it does well or badly.
@wetcel1236
@wetcel1236 1 month ago
Great serving! Another relevant brick in my personal AI wall. Thanks so much!
@decoder-sh
@decoder-sh 1 month ago
Hey, teacher! Leave those AIs alone! 🧱 Thanks for watching
@NicosoftNT
@NicosoftNT 1 month ago
Very interesting, thanks. Do you know how this compares to Haystack 2.0? I understand that Haystack, being a framework, offers more scalability, while this is more of a DIY script?
@decoder-sh
@decoder-sh 1 month ago
This is my first time hearing about Haystack, but this is definitely just a simple DIY script to build an understanding of what's happening under the hood of more advanced libraries like Haystack, Langchain, etc