How Microchip Uses Memgraph’s Knowledge Graphs to Optimize LLM Chatbots


April 25, 2024
Sara Tilly

How do you integrate Large Language Models (LLMs) with knowledge graphs? Microchip Technology recently tackled this question in a Memgraph-hosted webinar, demonstrating how Memgraph’s knowledge graphs enhance their LLM-powered chatbots. The session focused on using Retrieval-Augmented Generation (RAG) to make chatbot interactions more context-aware and responsive. Join us as we explore key insights from the webinar.

If you missed the session, here’s the full webinar recording: Microchip Optimizes LLM Chatbot with RAG and a Knowledge Graph.

Talking Point 1: Integration of LLMs and Knowledge Graphs

LLMs typically generate responses based solely on the data they were trained on. This often leads to answers that lack accuracy or relevance in specific contexts because they do not consider real-time or domain-specific data. This is where knowledge graphs come in.

Knowledge graphs address this limitation by providing structured, interconnected data that adds real-world context to the responses generated by LLMs. This integration enhances the accuracy and reliability of LLM outputs by grounding them in verifiable data.

Talking Point 2: Enhancement Technique RAG

RAG improves LLM responses by introducing a retrieval phase where the system actively searches for and retrieves information from external sources like databases or the internet before generating responses.

This approach enriches the LLM's responses with up-to-date and context-specific data, leading to more informed and relevant outputs. The technique significantly reduces the risk of generating hallucinations.
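The retrieve-then-generate loop at the heart of RAG can be sketched in a few lines of Python. In this sketch, a plain dictionary stands in for the external knowledge source and no actual LLM is called; `FACT_STORE`, `retrieve`, and `build_prompt` are hypothetical names used only to illustrate the idea.

```python
# Minimal RAG sketch: retrieve relevant facts first, then build a
# grounded prompt for generation. The fact store and keyword lookup
# are stand-ins for a real knowledge graph query.

FACT_STORE = {
    "order 1042": "Order 1042 shipped on 2024-04-02 via air freight.",
    "order 1043": "Order 1043 is awaiting component availability.",
}

def retrieve(question: str) -> list[str]:
    """Return stored facts whose key appears in the question."""
    q = question.lower()
    return [fact for key, fact in FACT_STORE.items() if key in q]

def build_prompt(question: str) -> str:
    """Ground the prompt in retrieved facts before generation."""
    context = "\n".join(retrieve(question)) or "No matching facts found."
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )

prompt = build_prompt("What is the status of order 1042?")
```

Because the model is instructed to answer only from the retrieved context, answers stay tied to verifiable data instead of the model's parametric memory.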

Talking Point 3: Practical Demo Using “Game of Thrones”

The demo illustrates the difference between responses from a standalone LLM and those enhanced with a knowledge graph by using examples from the "Game of Thrones" series.

The knowledge graph-enhanced LLM provided more accurate and detailed answers by leveraging complex relationships and data points about the series' characters and plotlines.
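The gap the demo highlights can be shown with a toy graph: once character relationships are stored explicitly, answering a question becomes a traversal over known edges rather than a guess from training data. The triples below are a hypothetical fragment, not the webinar's actual dataset.

```python
# Toy "Game of Thrones" knowledge graph as (source, relationship, target)
# triples. A question about the series becomes a lookup over explicit
# relationships instead of relying on model recall.

EDGES = [
    ("Jon Snow", "MEMBER_OF", "Night's Watch"),
    ("Jon Snow", "SIBLING_OF", "Arya Stark"),
    ("Arya Stark", "CHILD_OF", "Ned Stark"),
]

def related(entity: str, relationship: str) -> list[str]:
    """Return all targets connected to `entity` by `relationship`."""
    return [t for s, r, t in EDGES if s == entity and r == relationship]
```

A standalone LLM may invent a plausible-sounding answer; the graph either contains the edge or it doesn't.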

Talking Point 4: Application in Real-World Business Solutions

Our guest, William Firth, Senior Data Scientist at Microchip Technology, detailed how Microchip transitioned from using knowledge graphs and LLMs in theoretical or controlled scenarios to applying them in real-world business environments.

He highlighted a customer service chatbot that uses a knowledge graph to quickly provide detailed and accurate information about customer orders and logistics, improving service efficiency and customer satisfaction.

Talking Point 5: Custom LLM Implementation and Data Privacy

To address data privacy concerns and improve system integration, Microchip developed its own custom LLM, specifically tailored to interact seamlessly with their internal graph database. The custom LLM allows Microchip to maintain control over sensitive data and processes, ensuring that data privacy is upheld by avoiding reliance on public APIs like ChatGPT, which might not guarantee the same level of data confidentiality.

Talking Point 6: Scalability and Operational Efficiency

We discussed how scalability is critical when integrating LLMs with knowledge graphs in a business environment. Addressing scalability ensures that the solutions are effective and viable across larger organizational structures or more complex query environments.

Memgraph’s ability to handle extensive data nodes and relationships without performance degradation plays a key role in scaling applications. This capability allows businesses like Microchip to implement knowledge graph-enhanced LLMs across various departments and customer-facing applications.


Here’s a summary of some of the questions asked during the webinar’s Q&A session, along with answers drawn from the discussion:

1. Have you seen failures or hallucinations in the agent's responses? If you had to give a percentage, what value would it be?

  • William: We don't see so many hallucinations, at least in the end result, because what we're generating is a Cypher query. And so either that query is going to run, or it's not going to run, or it's going to give you the right information or it's not. We tend to see that we get the Cypher query generation wrong, or it leaves off part of the question.

2. How have you navigated the challenges of crafting your graph model and ensuring the LLM comprehends it effectively? Were there specific obstacles related to the precision of edge and node typologies and naming conventions?

  • William: The intuitiveness of the graph is really important, especially when you are trying to apply this method to an existing graph database where you might not have had to worry about the intuitiveness of the naming conventions previously. Ensuring that our graph model is intuitive is crucial because it helps the LLM to understand and interact with it more effectively. The precision in how nodes and edges are typified and named can significantly influence the performance and accuracy of the LLM responses. This is so important because it directly affects the LLM's ability to correctly interpret and utilize the interconnected data within the graph.

3. How do you handle it if the graph schema exceeds the content maximum token of the model?

  • William: I haven't encountered an issue where the graph schema is so big that it maxes out the token limit. If that's the case, your schema is probably too big, so I would try to crop it. I don't think native tools exist to do this, but you don't necessarily have to pass your entire schema into the prompt.
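One way to "crop" the schema, as suggested above, is to keep only the node labels mentioned in the user's question before assembling the prompt. The sketch below uses a naive keyword match and a made-up schema purely to illustrate the idea; a real system would likely use embeddings or graph neighborhoods to pick relevant labels.

```python
# Naive schema cropping: keep only schema entries whose label appears
# in the question, so the prompt stays within the model's token budget.

SCHEMA = {
    "Customer": ["name", "region"],
    "Order": ["id", "status", "shipped_at"],
    "Product": ["sku", "description"],
}

def crop_schema(question: str, schema: dict) -> dict:
    """Return the subset of the schema relevant to the question."""
    q = question.lower()
    kept = {label: props for label, props in schema.items() if label.lower() in q}
    return kept or schema  # fall back to the full schema if nothing matches

cropped = crop_schema("Which order is delayed for customer Acme?", SCHEMA)
```

Only the cropped portion is then interpolated into the prompt, which keeps the token count proportional to what the question actually touches.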


This webinar focused on the practical implementation and advantages of using knowledge graphs integrated with Large Language Models (LLMs) to enhance chatbot functionalities. We’ve emphasized how these technologies can bridge the gap between data and decision-makers, improving business processes and customer service efficiency.

Watch the full webinar recording to get all the details. If you want to discuss how knowledge graphs could work in your environment, book a 30-minute call with our DX team.

Next steps?

Here's how you can get started quickly with Memgraph and chatbots.

  1. Install Memgraph.

  2. Install LangChain and the appropriate client library. We recommend Python for its ease of use.

  3. Initialize a MemgraphGraph connection with LangChain.

  4. Import your data into Memgraph and refresh the schema.

  5. Create GraphCypherQAChain with MemgraphGraph and an LLM model to process natural language queries for your chatbot.
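Under the hood, the chain created in step 5 turns a natural-language question into a Cypher query, runs it against Memgraph, and phrases the result. That flow can be sketched without LangChain or a live database; the `generate_cypher` lookup below is a stand-in for the LLM's text-to-Cypher step, and the query runner is a stub, both purely illustrative.

```python
# Sketch of the question -> Cypher -> answer pipeline that
# GraphCypherQAChain automates. The LLM step is faked with a lookup
# table, and a callable stands in for a live Memgraph connection.

CYPHER_FOR = {
    "who ordered product x": (
        "MATCH (c:Customer)-[:PLACED]->(:Order)-[:CONTAINS]->"
        "(p:Product {sku: 'X'}) RETURN c.name"
    ),
}

def generate_cypher(question: str) -> str:
    """Stand-in for the LLM's text-to-Cypher translation."""
    return CYPHER_FOR.get(question.lower().rstrip("?"), "")

def answer(question: str, run_query) -> str:
    """Translate, execute, and phrase the result of one question."""
    cypher = generate_cypher(question)
    if not cypher:
        return "Could not translate the question into Cypher."
    rows = run_query(cypher)  # in practice, executed by Memgraph
    return ", ".join(rows) if rows else "No results."

# A fake query runner returns canned rows instead of hitting a database.
result = answer("Who ordered product X?", lambda q: ["Acme Corp"])
```

Note how this mirrors William's Q&A point above: the generated Cypher either runs and returns grounded rows or fails outright, which is why hallucinated free-text answers are rare in this setup.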

Further Reading

Here are the resources our guest speaker, William, recommends for building a chatbot.

Join us on Discord!
Find other developers performing graph analytics in real time with Memgraph.