Webinar
Microchip Optimizes LLM Chatbot with RAG and a Knowledge Graph
There's often a gap between the people who manage data and those who
use it to make decisions. This gap exists because using the data
typically requires technical skills, like writing database queries.
William Firth from Microchip Technology will show how Large Language
Models (LLMs), enhanced with Retrieval-Augmented Generation (RAG),
can bridge this gap, making it easy for anyone to access and
understand data in real time.
William will share a case study on how Microchip uses LLMs, enriched through RAG with a Memgraph knowledge graph, to create a contextual chatbot. The project was inspired by the need to quickly answer complex business questions such as "Why is this customer's sales order late?" William will discuss the implementation process, the advantages it has delivered, and the unforeseen challenges of running a real-world chatbot.
What you will learn:
- How knowledge graphs provide the grounding LLMs need
- How to build a chatbot using LLMs, RAG, and a knowledge graph to answer intricate business questions like "Why is this customer's order late?" (a minimal sketch of this pattern follows the list)
- The project implementation process, the benefits of prompt engineering, and the unforeseen challenges of a real-world chatbot
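To make the RAG-over-knowledge-graph pattern in the bullets above concrete, here is a minimal, hypothetical sketch, not Microchip's actual implementation. It assumes a local Memgraph instance reachable over Bolt, an invented Customer/SalesOrder/Part/Supplier schema, and an OpenAI-compatible chat model; the schema, Cypher query, and model name are all illustrative assumptions.

```python
# Minimal RAG-over-a-knowledge-graph sketch (illustrative only).
# Assumptions: Memgraph on bolt://localhost:7687 with a hypothetical
# Customer/SalesOrder/Part/Supplier schema, and OPENAI_API_KEY set.
from neo4j import GraphDatabase  # Memgraph speaks the Bolt protocol
from openai import OpenAI

GRAPH_URI = "bolt://localhost:7687"  # assumed local Memgraph instance

# Hypothetical Cypher query: pull a customer's late orders plus the
# parts and suppliers linked to them, which may explain the delay.
CONTEXT_QUERY = """
MATCH (c:Customer {name: $customer})-[:PLACED]->(o:SalesOrder)
WHERE o.status = 'LATE'
OPTIONAL MATCH (o)-[:CONTAINS]->(p:Part)<-[:SUPPLIES]-(s:Supplier)
RETURN o.id AS order_id, o.due_date AS due_date,
       collect(DISTINCT p.number) AS parts,
       collect(DISTINCT s.name) AS suppliers
"""

def fetch_graph_context(customer: str) -> str:
    """Retrieval step: run the Cypher query and flatten rows to text."""
    driver = GraphDatabase.driver(GRAPH_URI, auth=("", ""))
    with driver.session() as session:
        rows = [record.data()
                for record in session.run(CONTEXT_QUERY, customer=customer)]
    driver.close()
    return "\n".join(str(row) for row in rows) or "No late orders found."

def answer(question: str, customer: str) -> str:
    """Generation step: ground the LLM in the retrieved graph context."""
    context = fetch_graph_context(customer)
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the supplied graph context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Why is this customer's sales order late?", "Acme Corp"))
```

The split mirrors the webinar's theme: the graph query performs the retrieval, and the LLM reasons only over the rows it returns, so the answer stays grounded in the knowledge graph rather than in whatever the model memorized during training.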