Announcing GQLAlchemy 1.1

by Mislav Vuletic

GQLAlchemy has some pretty cool new features to show off. Like its counterpart SQLAlchemy, GQLAlchemy aims to be an Object Graph Mapper (OGM): a Python library that abstracts the graph objects found in graph databases into Python objects. With this release, we've added automatic schema validation, serialization and deserialization, on-disk storage, and a few other useful features. You can now store Python objects such as User, Molecule and Junction straight into a graph database without writing a single line of Cypher.

Don’t feel like reading? You can also watch a short demo where we went over the most important features found in this update.


Automatic schema validation, serialization and deserialization

from gqlalchemy import Field, Memgraph, Node, Relationship
from datetime import datetime


db = Memgraph()

class User(Node):
    id: int = Field(index=True, unique=True, exists=True, db=db)

class FriendsWith(Relationship, type="FRIENDS_WITH"):
    last_seen: datetime = Field()

john = User(id=1).save(db)
boris = User(id=2).save(db)
friends = FriendsWith(
    _start_node_id=john._id,
    _end_node_id=boris._id,
    last_seen=datetime.now(),
).save(db)

Save Python classes straight into a graph database, and have query results automatically converted back into Python objects. Defining a schema also generates uniqueness and existence constraints, as well as indexes, for you. Using a graph database from Python has never been easier.
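Under the hood, an OGM's job is serialization in both directions: a typed Python object becomes a Cypher statement on save, and a query result becomes a typed object on load. GQLAlchemy handles all of this for you; the toy sketch below (hypothetical helpers, not GQLAlchemy's API) just illustrates the idea:

```python
from dataclasses import dataclass, asdict, fields

@dataclass
class User:
    id: int

def to_cypher_create(node, label):
    # Serialize a dataclass into a Cypher CREATE clause.
    props = ", ".join(f"{k}: {v!r}" for k, v in asdict(node).items())
    return f"CREATE (:{label} {{{props}}})"

def from_record(cls, record):
    # Deserialize a query-result dict back into a typed object,
    # coercing each value to its annotated type.
    return cls(**{f.name: f.type(record[f.name]) for f in fields(cls)})

print(to_cypher_create(User(id=1), "User"))  # CREATE (:User {id: 1})
print(from_record(User, {"id": "2"}))        # User(id=2)
```

The real library also tracks constraints, indexes, and relationship endpoints, but the save/load round trip boils down to this translation.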

Store large properties with on-disk storage (SQL or a Key-Value store)

Since Memgraph is a graph database that stores data in memory (RAM), GQLAlchemy now provides an on-disk storage solution for large properties that aren't used in graph algorithms.

from gqlalchemy import Memgraph, SQLitePropertyDatabase, Node, Field
from typing import Optional


graphdb = Memgraph()
SQLitePropertyDatabase('path-to-my-db.db', graphdb)

class User(Node):
    id: int = Field(unique=True, exists=True, index=True, db=graphdb)
    huge_string: Optional[str] = Field(on_disk=True)
    
my_secret = "I LOVE DUCKS" * 1000
john = User(id=5, huge_string=my_secret).save(graphdb)
john2 = User(id=5).load(graphdb)
print(john2.huge_string)  # prints "I LOVE DUCKS" 1000 times

An on-disk property database can be used to:

  • store large properties (strings, parquet files, etc.)
  • write code without separately handling properties stored in different backends.

Saving properties is fully abstracted, so you won’t have to worry about where stuff is saved.
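To make that abstraction concrete, here's a minimal sketch of the underlying idea, not GQLAlchemy's actual implementation: properties flagged as on-disk are routed to a key-value table in SQLite instead of the in-memory graph.

```python
import sqlite3

class OnDiskPropertyStore:
    """Toy key-value store mimicking the idea behind SQLitePropertyDatabase:
    large properties live in SQLite, keyed by (node_id, property_name)."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS props "
            "(node_id INTEGER, name TEXT, value TEXT, "
            "PRIMARY KEY (node_id, name))"
        )

    def save_property(self, node_id, name, value):
        # Upsert the property value for this node.
        self.conn.execute(
            "INSERT OR REPLACE INTO props VALUES (?, ?, ?)",
            (node_id, name, value),
        )

    def load_property(self, node_id, name):
        row = self.conn.execute(
            "SELECT value FROM props WHERE node_id = ? AND name = ?",
            (node_id, name),
        ).fetchone()
        return row[0] if row else None

store = OnDiskPropertyStore()
store.save_property(5, "huge_string", "I LOVE DUCKS" * 1000)
print(len(store.load_property(5, "huge_string")))  # 12000
```

GQLAlchemy hides this routing behind the same `.save()` and `.load()` calls, which is why the on-disk example above looks identical to the purely in-memory one.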

Integrate your code with streaming support and triggers

From now on, you won’t have to create and manage data streams and database triggers directly with Cypher queries. You can instead use GQLAlchemy to accomplish these tasks programmatically in Python.

from gqlalchemy import Memgraph, MemgraphKafkaStream, MemgraphTrigger
from gqlalchemy.models import TriggerEventType, TriggerEventObject, TriggerExecutionPhase


db = Memgraph()

stream = MemgraphKafkaStream(
    name="ratings_stream",
    topics=["ratings"],
    transform="movielens.rating",
    bootstrap_servers="localhost:9092",
)
db.create_stream(stream)
db.start_stream(stream)

trigger = MemgraphTrigger(
    name="ratings_trigger",
    event_type=TriggerEventType.CREATE,
    event_object=TriggerEventObject.NODE,
    execution_phase=TriggerExecutionPhase.AFTER,
    statement="UNWIND createdVertices AS node SET node.created_at = LocalDateTime()",
)
db.create_trigger(trigger)
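For comparison, the trigger object above corresponds to roughly the following Cypher that you would otherwise run by hand (syntax per Memgraph's trigger documentation; check the exact form for your Memgraph version):

```cypher
CREATE TRIGGER ratings_trigger
ON () CREATE AFTER COMMIT
EXECUTE UNWIND createdVertices AS node SET node.created_at = LocalDateTime();
```

With GQLAlchemy, that query string never has to appear in your codebase.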

What’s next

In the following weeks, we aim to develop support for running Python functions inside the graph database. This will enable you to set up custom triggers and modules in Python. Stay tuned, and in the meantime, try out GQLAlchemy by running pip3 install gqlalchemy. Happy coding!
