Announcing GQLAlchemy 1.2 - Developing Python applications with graph databases

by Ivan Despot

Have you ever heard of SQLAlchemy? Well, we are developing something similarly awesome. The main difference is that GQLAlchemy isn’t used with relational databases but with graph databases, starting with Memgraph. We are a long way from achieving the same level of awesomeness as SQLAlchemy, but we are slowly moving in the right direction.

GQLAlchemy allows you to interact with your graph database without having to use the Cypher query language. Because it’s an OGM (Object Graph Mapper), you can use Python to import and query the data, save parts of your data to an on-disk database, enforce a graph schema, and much more.
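
As a taste of the OGM, here is a minimal sketch of defining a node class and saving an instance. It assumes a Memgraph instance running on the default host and port; check the reference guide for the exact Field options:

from gqlalchemy import Memgraph, Node, Field

db = Memgraph()  # connects to localhost:7687 by default

class User(Node):
    # index=True creates a label-property index, unique=True adds a uniqueness constraint
    name: str = Field(index=True, unique=True, db=db)

# Save the node to Memgraph
User(name="Alice").save(db)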

The new release, GQLAlchemy 1.2, includes two new features and a ton of bug fixes and quality-of-life improvements. Don’t feel like reading? You can also watch a short demo where we went over the most important features in this update.


New features:

  • Import data from Azure Blob, AWS S3 or local storage.
  • Start and manage Memgraph instances directly from Python.

Important bug fixes and improvements:

  • Fixed label inheritance.
  • Added where_not(), and_not(), or_not() and xor_not() methods (see the sketch after this list).
  • Improved the order_by() method in the query builder by changing its argument types.
  • Added an option to create a label index.
  • Added batch save methods for saving nodes (save_nodes()) and saving relationships (save_relationships()).
  • Added load_csv() and xor_where() methods to the query builder.

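To give a feel for a couple of these additions, here is a rough sketch of batch saving nodes and excluding matches with where_not(). The User class is a hypothetical model defined for this example, and the keyword arguments (item, operator, literal) follow the query-builder conventions, so double-check them against the reference guide:

from gqlalchemy import Memgraph, Node, match

db = Memgraph()

class User(Node):
    name: str

# Batch save a list of nodes in a single call instead of saving them one by one
db.save_nodes([User(name="Alice"), User(name="Bob")])

# Build MATCH ... WHERE NOT ... RETURN without writing Cypher by hand
results = list(
    match()
    .node(labels="User", variable="u")
    .where_not(item="u.name", operator="=", literal="Alice")
    .return_()
    .execute()
)
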
If you want to check out all the fixes and improvements, take a look at the changelog.

Import data from Azure Blob, AWS S3 or local storage

Currently, we support reading CSV, Parquet, ORC and IPC/Feather/Arrow file formats via the PyArrow package. You can import data into your graph database directly from sources like Amazon S3, Azure Blob Storage and local storage. It’s also possible to extend the importer with different data sources and file formats. If you end up implementing your own importer, why not open a pull request and add the feature to the next GQLAlchemy release?
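
For example, translating local CSV files into a graph could look roughly like this. Treat the importer class name, its parameters and the data_configuration dictionary as assumptions based on the data loader how-to guide referenced below, and check the guide for the exact details:

from gqlalchemy.loaders import CSVLocalFileSystemImporter

# data_configuration describes how table columns map to nodes and relationships;
# its structure is explained in the data loader how-to guide
data_configuration = ...

importer = CSVLocalFileSystemImporter(
    path="./data",
    data_configuration=data_configuration,
)

# Drop the existing data and translate the CSV tables into a graph
importer.translate(drop_database_on_start=True)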

To learn more, check out the how-to guide on using the GQLAlchemy data loader.

Starting and managing Memgraph instances directly in Python

We added a new module called instance_runner that allows you to start, stop and check the status of Memgraph instances from your Python code with GQLAlchemy. Instead of opening a new terminal to start Memgraph, or writing a Python or bash script to do it for you, just use the instance_runner module.

It’s possible to start Memgraph instances either from binary files (Memgraph installed on Linux or built from source) or using Docker. Here is a preview of how to start a Memgraph instance with Docker, run a query and return results:

from gqlalchemy.instance_runner import MemgraphInstanceDocker

# Start a Memgraph instance inside a Docker container and connect to it
memgraph_instance = MemgraphInstanceDocker()
memgraph = memgraph_instance.start_and_connect()

# execute_and_fetch() returns an iterator of result rows
print(list(memgraph.execute_and_fetch("RETURN 'Memgraph is running' AS result"))[0]["result"])
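
Starting from a binary works along the same lines. A hedged sketch, assuming a MemgraphInstanceBinary class with binary_path and user parameters as described in the instance runner how-to guide; adjust the path to your installation:

from gqlalchemy.instance_runner import MemgraphInstanceBinary

# Path to the Memgraph binary on a typical Linux installation
memgraph_instance = MemgraphInstanceBinary(binary_path="/usr/lib/memgraph/memgraph", user="memgraph")
memgraph = memgraph_instance.start_and_connect()

print(list(memgraph.execute_and_fetch("RETURN 'Memgraph is running' AS result"))[0]["result"])
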
For more details, check out the how-to guide on using the GQLAlchemy instance runner.

What’s next

This release may be over, but we have already started developing features for the next one. If you have any suggestions or requests, why not drop us an issue on GitHub?

Maybe you also have an idea of what we could implement next. Join our Discord server and share your thoughts. Stay tuned for the next release, and in the meantime: pip install gqlalchemy.
Happy coding!
