If you work with streaming data or are interested in near real-time graph analytics, Memgraph 2.1 should be interesting to you. We have announced some big news in graph streaming and accompanied it with articles, tutorials, a video overview, and a live demo. This article aims to summarise all the important things related to the launch of Memgraph 2.1 so you can read up on the relevant information as quickly as possible. So, here we go 👇
Only a couple of weeks have passed since our big 2.0 launch, but we haven’t stopped adding new features to Memgraph. For the Memgraph 2.1 release, we focused on providing you with more options for ingesting your data into Memgraph, and we also enhanced the existing methods. These are some of the key points of the release:
Head of Core Antonio wrote the announcement and covered the new release in detail.
Read more about it in the announcement.
Alongside Antonio, CTO Buda also covered the release. He recorded a short video, less than 4 minutes long, about what’s new in Memgraph 2.1.
Watch it on YouTube.
A couple of days after the launch, we organised a Memgraph 2.1 live demo. No worries if you missed it live because the recording is available on our YouTube channel.
The demonstration was hosted by CEO Dominik & CTO Buda, founders of Memgraph. Ivan from the DevRel team joined them.
Timestamps:
Also, our viewers had some intriguing questions:
So, what is librdkafka, and how does it handle offsets? librdkafka is a C client library with a C++ wrapper that implements the Kafka protocol, and offset management for consumers is handled automatically unless configured otherwise. This means that with manual offset management, each time Memgraph successfully processes a Kafka message, it has to instruct librdkafka to commit that offset.
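To make the pattern concrete, here is a minimal Python sketch using the confluent-kafka client, which is built on top of librdkafka. This is not Memgraph’s actual consumer code, just an illustration of the same idea: auto-commit is disabled, the message is processed first, and only then is its offset committed. The broker address, topic name, and process_message() function are illustrative assumptions.

```python
from confluent_kafka import Consumer


def process_message(value: bytes) -> None:
    # Placeholder for whatever your application does with the message,
    # e.g. transforming it and writing the result to a database.
    print(value)


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "group.id": "offset-demo",              # assumed consumer group
    "enable.auto.commit": False,            # take over offset management manually
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["ratings"])             # assumed topic name

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        process_message(msg.value())
        # Commit the offset only after the message was processed successfully,
        # so a crash before this point means the message gets re-delivered.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```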
Read more about what Kostas from the Core Engineering team learned by implementing offset management for Apache Kafka consumers.
Alongside the live demo, Ivan from the DevRel team was also responsible for a tutorial on how to analyse streaming data with Redpanda.
This tutorial focuses on processing real-time movie ratings streamed through Redpanda, a Kafka-compatible event streaming platform. This data can be used to generate movie recommendations with the help of a graph database and the Cypher query language.
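To give you a feel for the data side of the tutorial, here is a small sketch of producing rating events to a Redpanda topic. Because Redpanda speaks the Kafka protocol, the standard confluent-kafka Python producer works unchanged; the broker address, topic name, and JSON payload shape are illustrative assumptions rather than the exact format used in the tutorial.

```python
import json

from confluent_kafka import Producer

# Assumed local Redpanda broker; Redpanda is Kafka API-compatible,
# so producing works exactly like it would with Kafka.
producer = Producer({"bootstrap.servers": "localhost:9092"})

# An illustrative rating event; the tutorial's real schema may differ.
rating = {"userId": 42, "movie": "Heat (1995)", "rating": 4.5}

producer.produce("ratings", value=json.dumps(rating).encode("utf-8"))
producer.flush()
```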
Learn more about it on our blog.
Since we have a tutorial with Redpanda, Katarina from the DevRel team decided it would be right to make one with Pulsar as well. In this tutorial, you’ll learn how to create a graph schema from the Art Blocks dataset (an Ethereum-based NFT platform), stream your data with Apache Pulsar and Memgraph, and analyse your streaming data.
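For a rough idea of what the producer side of such a pipeline looks like, here is a minimal sketch using the pulsar-client Python library. The service URL, topic name, and payload are assumptions for illustration only; the tutorial itself walks through the exact setup with the Art Blocks dataset.

```python
import json

import pulsar

# Assumed local Pulsar broker; adjust the service URL to your deployment.
client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer("artblocks-sales")  # assumed topic name

# An illustrative NFT sale event; the Art Blocks data in the tutorial is richer.
event = {"project": "Chromie Squiggle", "token_id": 1234, "price_eth": 2.5}
producer.send(json.dumps(event).encode("utf-8"))

client.close()
```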
Apache Kafka may be the most popular solution for data streaming needs, but Apache Pulsar has gained a lot of popularity in recent years. Both have their pros and cons, and there are specific use cases that fit each product better, yet given its popularity, Kafka has become the de facto choice for most problems. We will try to provide you with a high-level overview of their similarities and differences so you can make a more informed choice.
Learn more about the pros and cons of Pulsar and Kafka.
It has been an exciting journey to get to Memgraph 2.1. Here are the people behind it.
Our growing community of developers is here to help unlock a whole new world of graph-based applications on top of your streaming data. Engage in meaningful and valuable conversations with other Memgraph developers and the Memgraph team. We are all here with the same goal: building world-class graph applications.
If you’d like to take Memgraph for a spin, you can download it for free.