a ridge on the cerebral cortex
For builders using AI across
multiple projects and tools.
Claude Code, Cowork, Codex, Antigravity — none of them remember the others.
Gyrus reads all of them and builds one knowledge base. Plain markdown, local-first, editable.
curl -fsSL https://gyrus.sh/install | bash
One command. Bring your own API key. macOS, Linux & Windows.
Run the install. Gyrus scans your sessions, extracts insights, and merges them into project wikis — automatically.
Found: 51 cowork, 33 claude-code, 53 codex, 132 antigravity
Cost estimate: ~$4.11 | Time estimate: ~14 minutes
[1/269] claude-code travel-app — 5 thoughts
[2/269] cowork safety-alerts — 8 thoughts
[3/269] codex clinic-notes — 4 thoughts
[4/269] antigravity style-engine — 11 thoughts
Merging 115 thoughts into 'safety-alerts'... ✓ Updated 'safety-alerts' v1
Merging 271 thoughts into 'style-engine'... ✓ Updated 'style-engine' v1
Done. 269 sessions, 1226 thoughts, 31 project pages.
Cost: ~$4.88 | LLM calls: 266 extract, 32 merge
What a project page looks like
Consumer app aggregating safety alerts from government agencies into a personalized feed. iOS and Android sharing the same backend. Privacy-first: all user data on-device, only anonymous push tokens stored remotely.
Built from 115 thoughts across claude-code, cowork, codex, and antigravity sessions
Reads Claude Code, Cowork, Codex, and Antigravity sessions from the same machine. Groups by workspace so a Cowork architecture session and a Claude Code build session end up on the same project page.
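A minimal sketch of the grouping idea. The field names (`tool`, `workspace`) are illustrative, not Gyrus's actual session schema:

```python
from collections import defaultdict

def group_by_workspace(sessions):
    """Group session records from different tools by workspace path,
    so sessions about the same project land on the same page.
    Field names here are illustrative, not Gyrus's real schema."""
    projects = defaultdict(list)
    for s in sessions:
        projects[s["workspace"]].append(s)
    return dict(projects)

sessions = [
    {"tool": "cowork", "workspace": "~/dev/safety-alerts", "kind": "architecture"},
    {"tool": "claude-code", "workspace": "~/dev/safety-alerts", "kind": "build"},
    {"tool": "codex", "workspace": "~/dev/clinic-notes", "kind": "debug"},
]
grouped = group_by_workspace(sessions)
# Both safety-alerts sessions end up under the same key
```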
An LLM you choose picks out decisions, status changes, and project context. Not code diffs or terminal output.
New insights merge into existing wiki pages. You can edit pages by hand anytime — your edits are preserved.
Supported tools
A cron job checks for new sessions on your schedule. No new sessions = no API calls = zero cost. Review your pages periodically — they're drafts, not gospel.
The installer sets up a cron job (or Windows Scheduled Task) at whatever frequency you choose — every 30 minutes, hourly, every 4 hours, or daily. Each run checks for new sessions since the last one. If nothing changed, it exits immediately. No LLM calls, no cost.
Gyrus installs a /gyrus slash command into Claude Code and instruction files for Codex. Your AI tools can query the knowledge base mid-session — context flows both ways.
When there are new sessions: ~$0.01–0.04 per run • Frequency is configurable during install • Self-updates with python3 ingest.py --update
Monday you architect in Cowork. Tuesday you build in Claude Code. Wednesday you debug in Codex. Thursday you refactor in Antigravity. By Friday, none of them know what happened in the others.
What Gyrus captures
What it skips
Each run adds new context. Pages get more useful as you work, but they're LLM-maintained — review them like you'd review a junior's draft.
Week 1
"Pulse is a real-time analytics dashboard"
Week 2
"Pulse targets early-stage startups. Switched to columnar DB for real-time queries"
Week 3
"Pulse's moat is sub-second latency + native payment integration"
Week 4
"Pulse needs demo by April 1. Ship churn predictor or cut scope."
Pages work best for: project status, key decisions, timeline, next steps. They're weaker for precise architecture docs or exact dependency versions — review and edit as needed.
Gyrus runs on your machine. You pay your LLM provider directly. No accounts, no cloud, no middleman.
| Line item | Cost | Example models |
|---|---|---|
| Thought extraction | ~$0.01/session | GPT-4.1-mini, Haiku, Gemini Flash |
| Knowledge merging | ~$0.05/page update | Sonnet, GPT-4.1, Gemini Pro |
| Typical monthly cost | $5–15 | |
| Cloud accounts needed | Zero | |
Run compare to benchmark models on your own sessions and pick one. Supports Anthropic, OpenAI, and Google models. Swap anytime.
16 models across Anthropic, OpenAI, and Google. Run compare to benchmark them on your own sessions — it tests extraction quality, generates sample wiki pages, and an AI judge grades each model. You choose both extraction and merge models.
Benchmark: 5 sessions, 7 fixtures, scored by Sonnet judge
| Model | Thoughts | Time | Cost/run | Quality |
|---|---|---|---|---|
| gpt-4.1-mini (default) | 31 | 25s | $0.030 | 9/10 |
| gpt-5-nano | 34 | 17s | $0.008 | 7/10 |
| gpt-4.1-nano | 30 | 13s | $0.012 | 7/10 |
| haiku | 27 | 25s | $0.045 | 8/10 |
| sonnet | 24 | 49s | $0.180 | 9/10 |
| gemini-flash | 14 | 23s | $0.010 | 8/10 |
| gemini-lite | 10 | 8s | ~free | 5/10 |
These are our results. Yours will vary — compare runs the same benchmark on your sessions and lets you pick.
LLM-generated docs can be polished but subtly wrong. We built an eval framework to catch that and iteratively improve.
Scored against hand-curated golden fixtures across 5 metrics:
Iterative prompt tuning against 7 golden test fixtures:
Run --eval to test our prompts against your own golden fixtures. Run curate to create them.
Entity grounding
Named entities in wiki pages must trace back to input thoughts. Ungrounded terms are flagged.
Date verification
Every date in the output must exist in the input. No fabricated timestamps.
Confidence calibration
Detects when "exploring" becomes "committed to" or "might" becomes "will" without evidence.
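The date-verification check, for instance, reduces to set arithmetic over dates found in the output versus the inputs. A minimal sketch, assuming ISO-formatted dates (the real checker may cover more formats):

```python
import re

# ISO dates only; the real checker may recognize more formats
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def fabricated_dates(wiki_page: str, input_thoughts: list[str]) -> set[str]:
    """Return dates that appear in the generated page
    but in none of the input thoughts."""
    seen = set()
    for t in input_thoughts:
        seen |= set(DATE_RE.findall(t))
    return set(DATE_RE.findall(wiki_page)) - seen

thoughts = ["Decided on 2025-03-14 to switch to a columnar DB"]
page = "Switched to a columnar DB on 2025-03-14. Demo due 2025-04-01."
flagged = fabricated_dates(page, thoughts)  # 2025-04-01 is flagged
```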
Working patterns and cross-cutting decisions go into ~/.gyrus/me.md. Ideas and brainstorms go into ideas.md. You can review project statuses in status.md and edit them by hand.
Gyrus stores everything as plain markdown in ~/.gyrus/. Sync however you want:
iCloud / Dropbox / Google Drive
Point ~/.gyrus/ to a synced folder
Git
cd ~/.gyrus && git init
Obsidian
Set your vault path to ~/.gyrus/
Notion
Optional adapter via --storage=notion
Run the installer on each machine. Same knowledge base, everywhere.
Good fit
Less useful for
Different approach. Mem0 and OpenMemory store facts as vector embeddings in a database. They're designed as memory APIs for AI apps.
Gyrus produces editable markdown wiki pages organized by project. You can read them, edit them, sync them with any tool. The output is structured documents, not database rows.
AGENTS.md and similar instruction files are static files you write and maintain by hand, scoped to a single repo. Gyrus extracts knowledge from your actual sessions across all your projects and tools automatically. They're complementary — AGENTS.md tells the AI how to behave, Gyrus captures what you've decided and built.
Gyrus currently supports Claude Code, Claude Cowork, OpenAI Codex, and Google Antigravity. More tools added on request — each one just needs a session reader (~30 lines of Python). Contributions welcome.
Gyrus runs entirely on your machine by default. Sessions are sent to your chosen LLM API for extraction (same as using any AI tool), but your knowledge base stays local as plain markdown files. Nothing is stored in any cloud unless you opt in — you can choose to sync via iCloud, Dropbox, Notion, or Git if you want cross-machine access.
Yes, and you should. Pages are LLM-maintained drafts — they can be subtly wrong or overgeneralized. Edit them in any text editor. Gyrus preserves your manual edits during the next merge. You can also set project statuses in status.md.
cd ~/.gyrus && uv run ingest.py --update. Downloads the latest scripts from GitHub. Your knowledge base, config, and API keys are preserved.
curl -fsSL https://gyrus.sh/uninstall | bash. Removes the cron job, Claude Code skill, and ~/.gyrus/ directory. It will warn you to back up your knowledge base first.
Takes 2 minutes. You'll see what it finds.
curl -fsSL https://gyrus.sh/install | bash