Knowledge Graph Memory

Stores information as entities (nodes) and relationships (edges) in a graph database. Unlike vector store memory, which retrieves "similar" content, graph memory can traverse relationships, enabling multi-hop reasoning: "Alice works at Acme" + "Acme is in New York" → "Alice works in New York".
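The Alice example above can be sketched with a plain in-memory adjacency structure (no graph database required); the entity and relation names are illustrative, not from any specific library:

```python
# Triples of (subject, relation, object) — the edges of a tiny graph.
edges = [
    ("Alice", "WORKS_AT", "Acme"),
    ("Acme", "LOCATED_IN", "New York"),
]

def neighbors(node):
    """Return (relation, target) pairs for edges leaving `node`."""
    return [(rel, dst) for src, rel, dst in edges if src == node]

# Two-hop traversal: Alice -> WORKS_AT -> Acme -> LOCATED_IN -> New York.
rel1, employer = neighbors("Alice")[0]
rel2, city = neighbors(employer)[0]
print(f"Alice {rel1} {employer}; {employer} {rel2} {city}")
# A vector search for "Where does Alice work?" may never surface the
# Acme->New York fact; the traversal reaches it in one extra hop.
```

A production system would do the same walk in a graph database's query language (e.g. a Cypher path pattern in Neo4j) rather than a Python loop.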


Structure

An LLM extracts entities and relationships from text, storing them as nodes and edges. At query time, relevant entities are identified, the graph is traversed to pull connected information, and the resulting subgraph is provided as context.
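The query-time flow described above — identify seed entities, traverse outward, and return the connected subgraph as context — can be sketched as a bounded breadth-first walk. This is a minimal illustration under assumed names, not a specific library's API:

```python
from collections import deque

# Facts previously extracted by the LLM, stored as triples.
triples = [
    ("Alice", "WORKS_AT", "Acme"),
    ("Acme", "LOCATED_IN", "New York"),
    ("Bob", "WORKS_AT", "Globex"),
]

def subgraph(seeds, max_hops=2):
    """Collect all triples reachable within `max_hops` of the seed entities."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    result = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding past the hop limit
        for s, r, o in triples:
            if s == node and (s, r, o) not in result:
                result.append((s, r, o))
                if o not in seen:
                    seen.add(o)
                    frontier.append((o, depth + 1))
    return result

# Only Alice's connected facts are returned; Bob's are excluded.
context = subgraph(["Alice"])
```

The resulting triples would then be serialized (e.g. as "Alice WORKS_AT Acme") and placed into the prompt as context.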


Mechanism

  • LLM extracts entities (people, orgs, concepts) and relationships from text
  • Entities become nodes with properties; relationships become typed edges
  • Temporal metadata tracks when facts became true or were superseded
  • Incremental updates — new information merges with existing graph
  • Storage: Neo4j, Amazon Neptune, or in-memory graph libraries
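Two of the mechanisms above — temporal metadata and incremental merging — can be sketched together. This is an illustrative data model with assumed field names, not the schema of any particular graph store:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    valid_from: str                  # ISO date the fact became true
    valid_to: Optional[str] = None   # None means currently valid

facts: list[Fact] = []

def merge(new: Fact):
    """Incremental update: supersede any currently-valid fact with the
    same subject and relation, then append the new fact."""
    for f in facts:
        if (f.subject == new.subject and f.relation == new.relation
                and f.valid_to is None):
            f.valid_to = new.valid_from  # close out the old fact
    facts.append(new)

merge(Fact("Alice", "WORKS_AT", "Acme", "2020-01-01"))
merge(Fact("Alice", "WORKS_AT", "Globex", "2023-06-01"))

# The Acme fact is retained with its validity window closed, so the graph
# can answer both "where does Alice work?" and "where did she work in 2021?".
current = [f for f in facts if f.valid_to is None]
```

Superseded facts are kept rather than deleted, which is what makes "when did this change?" queries possible.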

Key Characteristics

  • Multi-hop reasoning — traverses chains of relationships that flat retrieval misses
  • Structured knowledge — entities and relationships have types and properties
  • Temporal awareness — can track when facts changed over time
  • Complex to build — entity extraction, resolution, and graph construction are hard problems
  • Higher infrastructure cost — requires a graph database and extraction pipeline

When to Use

  • Your domain has rich relationships between entities (people, orgs, systems, concepts)
  • Questions require connecting multiple pieces of information (multi-hop reasoning)
  • Temporal tracking matters — you need to know when facts changed
  • Vector similarity alone gives poor results because related content isn't semantically similar
  • You're building on top of an existing knowledge graph or structured data source