Course
Retrieval Augmented Generation (RAG) with LangChain
Included with Premium or Teams
Course Description
Build RAG Systems with LangChain
Retrieval Augmented Generation (RAG) is a technique used to overcome one of the main limitations of large language models (LLMs): their limited knowledge. RAG systems integrate external data from a variety of sources into LLMs. This process of connecting multiple different systems is usually tedious, but LangChain makes it a breeze!

Learn State-of-the-Art Splitting and Retrieval Methods
Level up your RAG architecture! You'll learn how to load and split code files, including Python and Markdown files, to ensure that splits are "aware" of code syntax. You'll split your documents using tokens instead of characters to ensure that your retrieved documents stay within your model's context window. Discover how semantic splitting can help retain context by detecting when the subject of the text shifts and splitting at these points. Finally, learn to evaluate your RAG architecture robustly with LangSmith and Ragas.

Discover the Graph RAG Architecture
Flip your RAG architecture on its head and discover how graph-based, rather than vector-based, RAG systems can improve your system's understanding of the entities and relationships in your documents. You'll learn how to convert unstructured text data into graphs, using LLMs to do the translation! Then, you'll store these graph documents in a Neo4j graph database and integrate it into a wider RAG system to complete the application.

Prerequisites
Developing LLM Applications with LangChain

Building RAG Applications with LangChain
Improving the RAG Architecture
Introduction to Graph RAG
Earn Statement of Accomplishment
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
FAQs
What will I learn about in this course?
This course will teach you how to integrate external data into large language models (LLMs) to extend their knowledge and make them more use-case-specific.
Who is this course intended for?
This course is suitable for software engineers, developers (aspiring or otherwise), and anyone interested in learning how to integrate LLMs into user-facing applications. Familiarity with using LangChain is expected, along with concepts related to large language models (LLMs).
What is LangChain and why is it useful?
LangChain is a framework for developing applications powered by large language models (LLMs). It provides a single, unified syntax for connecting the various components used in these applications, including LLMs, prompt templates, and vector databases.
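To illustrate the composition pattern LangChain standardizes, here is a toy sketch in plain Python. Note that this is not LangChain's actual API: the `PromptTemplate` and `FakeLLM` classes below are hypothetical stand-ins showing how a prompt template's output feeds a model in a chain.

```python
# Toy illustration of the "chain" pattern (not LangChain's real API):
# one component's output becomes the next component's input.

class PromptTemplate:
    """Fills named placeholders in a template string."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class FakeLLM:
    """Stand-in for a real model: echoes the prompt it receives."""
    def invoke(self, prompt: str) -> str:
        return f"LLM received: {prompt}"

def chain(template: PromptTemplate, llm: FakeLLM, **inputs) -> str:
    """Pipe the formatted prompt into the model, chain-style."""
    return llm.invoke(template.format(**inputs))

prompt = PromptTemplate("Summarize the following topic: {topic}")
print(chain(prompt, FakeLLM(), topic="RAG"))
# -> LLM received: Summarize the following topic: RAG
```

In real LangChain code, a prompt template, model, and output parser are composed the same way, with a single unified syntax regardless of which provider or database sits behind each component.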
What is Retrieval Augmented Generation (RAG)?
Retrieval Augmented Generation, or RAG, is the process of providing large language models (LLMs) with new, relevant information that helps them respond more effectively to user inputs. Typically, this works by retrieving the documents that are the most semantically similar to the user input, and adding these documents to the model's prompt. This course will dive deep into this process and also explore graph-based RAG.
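The retrieve-then-augment loop described above can be sketched in a few lines of plain Python. As an assumption for illustration, word overlap stands in for the embedding-based semantic similarity a real RAG system would compute, and the helper names are hypothetical:

```python
# Minimal sketch of RAG's retrieval step. Real systems embed text as
# vectors and query a vector database; word overlap stands in here
# so the example is self-contained.

def similarity(query: str, doc: str) -> float:
    """Jaccard overlap of lowercased words (a toy similarity score)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved documents into the model's prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Use the context to answer.\nContext:\n{context}\nQuestion: {query}"

docs = [
    "LangChain connects LLMs to external data sources.",
    "Neo4j is a graph database.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
print(build_prompt("How does RAG add documents to the prompt?", docs))
```

The augmented prompt, not the bare question, is what gets sent to the LLM, which is how the model answers using knowledge it was never trained on.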
What is Graph RAG?
Graph RAG is an alternative to traditional vector-based RAG in which text data is transformed into a graph of nodes and relationships, and information is retrieved by querying the graph. This approach has several advantages over vector-based RAG, including better capturing the relationships between entities, which are largely ignored in vector-based RAG.
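A toy sketch of the graph side, under simplifying assumptions: in practice an LLM extracts (subject, relation, object) triples from text and they are stored in a graph database such as Neo4j, but a plain list of triples is enough to show why relationships become directly queryable:

```python
# Toy graph store for Graph RAG. Each triple is one edge:
# (subject) -[relation]-> (object).

triples = [
    ("LangChain", "INTEGRATES_WITH", "Neo4j"),
    ("Neo4j", "IS_A", "graph database"),
    ("Graph RAG", "USES", "Neo4j"),
]

def relations_of(entity: str, triples) -> list[tuple[str, str]]:
    """Return every (relation, object) pair whose subject is `entity`."""
    return [(rel, obj) for subj, rel, obj in triples if subj == entity]

print(relations_of("Neo4j", triples))
# -> [('IS_A', 'graph database')]
```

In a vector-based system, the fact that Graph RAG uses Neo4j is only implicit in whichever chunks happen to mention both; in a graph, it is an explicit edge that retrieval can follow.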
Join over 19 million learners and start Retrieval Augmented Generation (RAG) with LangChain today!