Before we start | |||
Why this course is different | 00:01:00 | ||
Prerequisites | 00:01:00 | ||
Essential topics and terms (theory) | 00:04:00 | ||
Why this course does not cover open-source models like Llama 2 | 00:01:00 | ||
Optional: Install Visual Studio Code | 00:02:00 | ||
Get the source files with Git from GitHub | 00:02:00 | ||
Create an OpenAI account and API key | 00:02:00 | ||
Preparation | |||
Setup of a virtual environment | 00:03:00 | ||
Set up the OpenAI API key as an environment variable | 00:03:00 | ||
Exploring the vanilla OpenAI package | 00:03:00 | ||
LangChain Basics | |||
LLM Basics | 00:07:00 | ||
Prompting Basics | 00:02:00 | ||
Theory: Prompt Engineering Basics | 00:02:00 | ||
Few Shot Prompting | 00:05:00 | ||
Chain-of-Thought Prompting | 00:02:00 | ||
Pipeline-Prompts | 00:04:00 | ||
Prompt Serialisation | 00:03:00 | ||
Chains - From basic to advanced chains | |||
Introduction to chains | 00:01:00 | ||
Basic chains – the LLMChain | 00:03:00 | ||
Response Schemas and Output Parsers | 00:06:00 | ||
LLMChain with multiple inputs | 00:02:00 | ||
Sequential Chains | 00:04:00 | ||
Router Chains | 00:04:00 | ||
Callbacks | |||
Callbacks | 00:05:00 | ||
Memory | |||
Memory basics – Conversation Buffer Memory | 00:04:00 | ||
Conversation Summary Memory | 00:03:00 | ||
EXERCISE: Use Memory to build a Streamlit Chatbot | 00:01:00 | ||
SOLUTION: Chatbot with Streamlit | 00:03:00 | ||
OpenAI Function Calling | |||
OpenAI Function Calling – Vanilla OpenAI Package | 00:08:00 | ||
Function Calling with LangChain | 00:04:00 | ||
Limits and issues of the LangChain implementation | 00:03:00 | ||
Retrieval Augmented Generation (RAG) | |||
RAG – Theory and building blocks | 00:03:00 | ||
Loaders and Splitters | 00:04:00 | ||
Embeddings – Theory and practice | 00:04:00 | ||
Vector Stores and Retrievers | 00:07:00 | ||
RAG Service with FastAPI | 00:05:00 | ||
Agents | |||
Agent Basics – LLMs learn to use tools | 00:06:00 | ||
Agents with a custom RAG-Tool | 00:07:00 | ||
Chat Agents | 00:03:00 | ||
Indexing API | |||
Indexing API – keep your documents in sync | 00:02:00 | ||
PREREQUISITE: Docker Installation | 00:01:00 | ||
Setup of PgVector and Record Manager | 00:04:00 | ||
Indexing Documents in practice | 00:06:00 | ||
Document Retrieval with PgVector | 00:03:00 | ||
LangSmith | |||
Introduction to LangSmith (User Interface and Hub) | 00:02:00 | ||
LangSmith Projects | 00:07:00 | ||
LangSmith Datasets and Evaluation | 00:13:00 | ||
Microservice Architecture for LLM Applications | |||
Introduction to Microservice Architecture | 00:04:00 | ||
How our Chatbot works in a Microservice Architecture | 00:02:00 | ||
Introduction to Docker | 00:05:00 | ||
Introduction to Kubernetes | 00:02:00 | ||
Deployment of the LLM Microservices to Kubernetes | 00:13:00 | ||
LangChain Expression Language (LCEL) | |||
Intro to LangChain Expression Language | 00:01:00 | ||
LCEL Part 1 – Pipes and OpenAI Function Calling | 00:07:00 | ||
LCEL – Part 2 – Vector Stores, Item Getter, Tools | 00:06:00 | ||
LCEL – Part 3 – Arbitrary Functions, Runnable Interface, Fallbacks | 00:07:00 |