Replace recursive CTEs with native Neo4j graph traversal. Same GBrain API – pages, links, traversal, search – backed by index-free adjacency.
Live at graphbrain.belweave.ai · v0.2.0 · MIT. Try the Web Dashboard – provision brains, browse pages, explore graphs.
GBrain stores its knowledge graph in Postgres: links are rows in a `links` table, and traversal uses recursive CTEs. GraphBrain swaps the storage layer for Neo4j while keeping the exact same API. Every GBrain operation works unchanged, but now:
- **O(1) per hop.** BFS to depth 10 completes in single-digit milliseconds.
- **Per-brain isolation.** Each brain is a separate Neo4j database. No shared namespace, no query filtering, no cross-brain leakage.
- **Native edges.** Links are real Neo4j relationships – typed, directed, with context – exactly how you think about them.
- **Batch writes.** Cypher `UNWIND` creates hundreds of links in one query. No more sequential INSERTs.
- **Per-brain auth.** Every brain gets its own `sk_*` key. Cross-brain access returns 401.
- **Self-hostable.** Deploy on your own hardware with Cloudflare Tunnel. Auto-SSL, zero port forwarding.
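The batch write path is exposed through the `/links/batch` endpoint. Here is a sketch of building a multi-link payload locally; the field names come from the single-link endpoint (`from_slug`, `to_slug`, `link_type`), but wrapping them in a `links` array is an assumption about the batch shape:

```shell
BASE="https://graphbrain.belweave.ai"
BID="brain_your_id"
KEY="sk_your_key"

# Build one payload containing several links
PAYLOAD=$(python3 - <<'EOF'
import json
links = [
    {"from_slug": "alice", "to_slug": f"person-{i}", "link_type": "knows"}
    for i in range(3)
]
print(json.dumps({"links": links}))
EOF
)
echo "$PAYLOAD"

# One request creates every edge server-side in a single Cypher UNWIND:
# curl -s -X POST "$BASE/v1/$BID/links/batch" \
#   -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
#   -d "$PAYLOAD"
```

One round trip instead of N sequential POSTs is where the batch endpoint pays off.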
Already have GBrain? Point it at GraphBrain's REST API in 2 steps.
```shell
curl -X POST https://graphbrain.belweave.ai/v1/brains \
  -H "Content-Type: application/json" \
  -d '{"name":"my-brain"}'
```
You'll get back a brain_id and api_key. Save both.
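For scripting, both fields can be pulled out of the response with a couple of lines of `python3`. The sample JSON below is illustrative, not a real key:

```shell
# Illustrative response shape; field names per the API above
RESP='{"brain_id":"brain_abc123","api_key":"sk_example_key"}'

BID=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['brain_id'])")
KEY=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")
echo "$BID"   # brain_abc123
echo "$KEY"   # sk_example_key
```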
```shell
# Set your API key as an env var
export GBRAIN_GRAPH_API_KEY=sk_your_key_here

# Point GBrain at your GraphBrain instance
gbrain init --engine graphbrain \
  --url https://graphbrain.belweave.ai/v1/brain_YOUR_BRAIN_ID
```
That's it. GBrain's CLI, MCP server, and all existing scripts now use Neo4j for graph operations.
If you're an AI agent (Claude, Codex, Cursor, etc.) setting up GraphBrain for your user, copy-paste this:
```shell
# 1. Install GBrain if not present
git clone https://github.com/garrytan/gbrain ~/gbrain && cd ~/gbrain && bun install

# 2. Create a GraphBrain brain
RESP=$(curl -s -X POST https://graphbrain.belweave.ai/v1/brains \
  -H "Content-Type: application/json" \
  -d '{"name":"my-knowledge-graph"}')
BID=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['brain_id'])")
KEY=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")

# 3. Initialize GBrain pointing at GraphBrain
cd ~/gbrain
GBRAIN_GRAPH_API_KEY=$KEY gbrain init --engine graphbrain \
  --url "https://graphbrain.belweave.ai/v1/$BID"

# 4. Start using it
echo "Brain ID: $BID"
echo "API Key: $KEY"
```
```shell
# Already have GBrain set up with Postgres? Just point it at GraphBrain:
export GBRAIN_GRAPH_API_KEY=sk_your_key_here
gbrain init --engine graphbrain \
  --url https://graphbrain.belweave.ai/v1/brain_YOUR_BRAIN_ID

# Your existing scripts, MCP server, and cron jobs work unchanged.
# Graph operations (links, traversal) now use Neo4j.
```
```shell
# Verify everything works
BASE="https://graphbrain.belweave.ai"
KEY="sk_your_key"
BID="brain_your_id"

# Health check
curl -s "$BASE/health"

# Create pages (both endpoints of the link below)
curl -s -X PUT "$BASE/v1/$BID/pages/alice" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"title":"Alice Chen","type":"person","content":"Engineer"}'
curl -s -X PUT "$BASE/v1/$BID/pages/bob" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"title":"Bob","type":"person","content":"Engineer"}'

# Create links
curl -s -X POST "$BASE/v1/$BID/links" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"from_slug":"alice","to_slug":"bob","link_type":"knows"}'

# Traverse
curl -s -X POST "$BASE/v1/$BID/traverse" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"start_slug":"alice","depth":2}'

# Search
curl -s "$BASE/v1/$BID/search?q=engineer" -H "X-API-Key: $KEY"
```
All brain endpoints require an `X-API-Key: sk_...` header. Cross-brain access returns 401.
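Since every call repeats the same header, a small wrapper keeps scripts short. The `gb` helper below is a hypothetical convenience, not part of GraphBrain itself; `BASE`, `BID`, and `KEY` are placeholders:

```shell
BASE="https://graphbrain.belweave.ai"
BID="brain_your_id"
KEY="sk_your_key"

# Hypothetical wrapper: $1 is the path under the brain,
# remaining args pass straight through to curl.
gb() {
  path="$1"; shift
  curl -s -H "X-API-Key: $KEY" "$@" "$BASE/v1/$BID$path"
}

# usage (once BID/KEY are real):
# gb /stats
# gb "/search?q=engineer"
```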
| Method | Path | Description |
|---|---|---|
| POST | `/v1/brains` | Create a new brain (returns ID + key) |
| GET | `/v1/brains` | List all brains |
| PUT | `/v1/:id/pages/:slug` | Create or update a page |
| GET | `/v1/:id/pages/:slug` | Get a page |
| GET | `/v1/:id/pages` | List pages (`?type=person&limit=50`) |
| POST | `/v1/:id/links` | Create a typed edge |
| POST | `/v1/:id/links/batch` | Batch-create edges (UNWIND) |
| GET | `/v1/:id/links/:slug` | Outgoing edges from a page |
| GET | `/v1/:id/backlinks/:slug` | Incoming edges to a page |
| POST | `/v1/:id/traverse` | BFS traversal (depth, direction, type filter) |
| GET | `/v1/:id/graph` | Full graph for visualization |
| GET | `/v1/:id/search?q=` | Full-text search across pages |
| GET | `/v1/:id/stats` | Brain statistics |
| POST | `/v1/:id/timeline` | Add timeline entry |
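The traverse endpoint takes depth, direction, and type filters. A sketch of a filtered request body follows; only `start_slug` and `depth` are confirmed above, so the `direction` and `link_type` parameter names are assumptions for illustration:

```shell
# Build a filtered traversal body (parameter names beyond
# start_slug/depth are assumed, not confirmed by the API docs)
BODY=$(python3 - <<'EOF'
import json
print(json.dumps({
    "start_slug": "alice",
    "depth": 3,
    "direction": "out",
    "link_type": "knows",
}))
EOF
)
echo "$BODY"

# curl -s -X POST "$BASE/v1/$BID/traverse" \
#   -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
#   -d "$BODY"
```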
| Version | Item | Status |
|---|---|---|
| v0.1 | Core engine – pages, links, traversal, search, stats, timeline | ✅ Done |
| v0.1 | GBrain adapter – PR #594 accepted, 25/25 tests pass | ✅ Done |
| v0.1 | One-click server setup script | ✅ Done |
| v0.1 | Per-brain API keys with cross-brain auth enforcement | ✅ Done |
| v0.1 | BrainBench: 258/261 (+10 over stock GBrain) | ✅ Done |
| v0.2 | Persistent API key storage (SQLite, survives restarts) | ✅ Done |
| v0.2 | Rate limiting – per-brain 100/min + per-IP 10/min | ✅ Done |
| v0.2 | Real `/health` endpoint – Neo4j + keystore checks, 503 on failures | ✅ Done |
| v0.2 | Smoke tests – full pipeline (provision → CRUD → traverse → delete) | ✅ Done |
| v0.2 | GBrain CLI plugin – `gbrain init --engine graphbrain` | ✅ Done |
| v0.3 | Web dashboard – browser-based brain management (Vercel) | ⏳ Planned |
| v0.3 | Interactive graph visualization (D3/Three.js) | ⏳ Planned |
| v0.3 | Migration tooling (Postgres GBrain → Neo4j) | ⏳ Planned |
| v0.4 | Horizontal scaling – multiple API instances + load balancer | ⏳ Planned |
| v0.4 | Neo4j read replicas | ⏳ Planned |
```shell
curl -fsSL https://raw.githubusercontent.com/pkyanam/graphbrain/main/scripts/setup-server.sh | bash
```
Works on any Ubuntu/Debian server with Docker. Installs Bun, Docker, Neo4j, GraphBrain, and creates a Cloudflare Tunnel in ~5 minutes.
```shell
git clone https://github.com/pkyanam/graphbrain
cd graphbrain
docker compose up -d

# GraphBrain:     http://localhost:3000
# Neo4j Browser:  http://localhost:7474 (neo4j / graphbrain-dev)
```