a ridge on the cerebral cortex
For builders using AI across
multiple projects and tools.
Claude Code, Cowork, Codex, Antigravity — none of them remember the others.
Gyrus reads all of them and builds one knowledge base. Plain markdown, local-first, editable.
curl -fsSL https://gyrus.sh/install | bash
One command. Bring your own API key. macOS, Linux & Windows.
Run the install. Gyrus scans your sessions, extracts insights, and merges them into project wikis — automatically.
```
Found: 51 cowork, 33 claude-code, 53 codex, 132 antigravity
Cost estimate: ~$4.11 | Time estimate: ~14 minutes

[1/269] claude-code  travel-app — 5 thoughts
[2/269] cowork       safety-alerts — 8 thoughts
[3/269] codex        clinic-notes — 4 thoughts
[4/269] antigravity  style-engine — 11 thoughts

Merging 115 thoughts into 'safety-alerts'... ✓ Updated 'safety-alerts' v1
Merging 271 thoughts into 'style-engine'... ✓ Updated 'style-engine' v1

Done. 269 sessions, 1226 thoughts, 31 project pages.
Cost: ~$4.88 | LLM calls: 266 extract, 32 merge
```
What a project page looks like
Consumer app aggregating safety alerts from government agencies into a personalized feed. iOS and Android sharing the same backend. Privacy-first: all user data on-device, only anonymous push tokens stored remotely.
Built from 115 thoughts across claude-code, cowork, codex, and antigravity sessions
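Rendered as plain markdown, a page for that project might look roughly like this (the section headings and wording are illustrative, not Gyrus's exact template):

```markdown
# safety-alerts

Consumer app aggregating government safety alerts into a
personalized feed. iOS and Android share one backend.

## Key decisions
- Privacy-first: all user data on-device; only anonymous push
  tokens stored remotely.

## Status
Built from 115 thoughts across claude-code, cowork, codex, and
antigravity sessions.
```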
Reads Claude Code, Cowork, Codex, and Antigravity sessions from the same machine. Groups by workspace so a Cowork architecture session and a Claude Code build session end up on the same project page.
An LLM you choose picks out decisions, status changes, and project context. Not code diffs or terminal output.
New insights merge into existing wiki pages. You can edit pages by hand anytime — your edits are preserved.
Supported tools
A cron job pulls the latest from your GitHub sync, ingests new sessions, and pushes back. No new sessions = no API calls = zero cost. Review pages periodically — they're drafts, not gospel.
The installer sets up a cron job (or Windows Scheduled Task) at your chosen frequency — every 30 minutes, hourly, every 4 hours, or daily. Each run git pulls the latest, ingests anything new, and pushes back. If nothing changed, it exits immediately. No LLM calls, no cost.
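Conceptually, that is a one-line scheduled job. The sketch below is illustrative only; the installer writes the real entry, and the binary path and invocation shown here are assumptions:

```
# Every 30 minutes: pull, ingest anything new, push.
# Exits immediately if nothing changed (no LLM calls, no cost).
*/30 * * * * $HOME/.local/bin/gyrus >> $HOME/.gyrus/cron.log 2>&1
```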
Gyrus installs a /gyrus slash command into Claude Code and instruction files for Codex. Your AI tools can query the knowledge base mid-session — context flows both ways.
When there are new sessions: ~$0.01–0.04 per run • Frequency is configurable during install • Self-updates with gyrus update • Diagnose with gyrus doctor
Monday you architect in Cowork. Tuesday you build in Claude Code. Wednesday you debug in Codex. Thursday you refactor in Antigravity. By Friday, none of them know what happened in the others.
What Gyrus captures
What it skips
Each run adds new context. Pages get more useful as you work, but they're LLM-maintained — review them like you'd review a junior's draft.
Week 1
"Pulse is a real-time analytics dashboard"
Week 2
"Pulse targets early-stage startups. Switched to columnar DB for real-time queries"
Week 3
"Pulse's moat is sub-second latency + native payment integration"
Week 4
"Pulse needs demo by April 1. Ship churn predictor or cut scope."
Pages work best for: project status, key decisions, timeline, next steps. They're weaker for precise architecture docs or exact dependency versions — review and edit as needed.
Gyrus runs on your machine. You pay your LLM provider directly. No accounts, no cloud, no middleman.
| Stage | Cost | Example models |
|---|---|---|
| Thought extraction | ~$0.01/session | GPT-4.1-mini, Haiku, Gemini Flash |
| Knowledge merging | ~$0.05/page update | Sonnet, GPT-4.1, Gemini Pro |

Typical monthly cost: $5–15
Cloud accounts needed: zero
Run compare to benchmark models on your own sessions and pick one. Supports Anthropic, OpenAI, and Google models. Swap anytime.
16 models across Anthropic, OpenAI, and Google. Run compare to benchmark them on your own sessions — it tests extraction quality, generates sample wiki pages, and an AI judge grades each model. You choose both extraction and merge models.
Benchmark: 5 sessions, 7 fixtures, scored by a Sonnet judge
| Model | Thoughts | Time | Cost/run | Quality |
|---|---|---|---|---|
| gpt-4.1-mini (default) | 31 | 25s | $0.030 | 9/10 |
| gpt-5.4-nano | 34 | 17s | $0.008 | 7/10 |
| gpt-4.1-nano | 30 | 13s | $0.012 | 7/10 |
| haiku | 27 | 25s | $0.045 | 8/10 |
| sonnet | 24 | 49s | $0.180 | 9/10 |
| gemini-flash | 14 | 23s | $0.010 | 8/10 |
| gemini-lite | 10 | 8s | ~free | 5/10 |
These are our results. Yours will vary — compare runs the same benchmark on your sessions and lets you pick.
LLM-generated docs can be polished but subtly wrong. We built an eval framework to catch that and iteratively improve.
Scored against hand-curated golden fixtures across 5 metrics:
Iterative prompt tuning against 7 golden test fixtures:
Run --eval to test our prompts against your own golden fixtures. Run curate to create them.
Entity grounding
Named entities in wiki pages must trace back to input thoughts. Ungrounded terms are flagged.
Date verification
Every date in the output must exist in the input. No fabricated timestamps.
Confidence calibration
Detects when "exploring" becomes "committed to" or "might" becomes "will" without evidence.
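The date-verification metric, for instance, reduces to a containment check: collect every date the generated page mentions and flag any that never appears in the input thoughts. A minimal sketch of the idea, assuming ISO-formatted dates (the eval framework's actual implementation and date handling are not documented here):

```python
import re

DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # assumes ISO dates

def unverified_dates(page: str, thoughts: list[str]) -> set[str]:
    """Return dates that appear in the wiki page but in no input thought."""
    seen: set[str] = set()
    for t in thoughts:
        seen.update(DATE.findall(t))
    return set(DATE.findall(page)) - seen

thoughts = ["Decided on 2025-03-14 to cut scope", "Demo due 2025-04-01"]
page = "Demo scheduled for 2025-04-01; kickoff was 2025-01-09."
print(unverified_dates(page, thoughts))  # flags the fabricated 2025-01-09
```

Entity grounding works the same way with named entities instead of dates; confidence calibration needs an LLM pass, since wording shifts like "might" to "will" aren't regex-detectable.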
Working patterns and cross-cutting decisions go into ~/.gyrus/me.md. Ideas and brainstorms go into ideas.md. You can review project statuses in status.md and edit them by hand.
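Assuming the default storage location, that gives a layout roughly like this (me.md, ideas.md, and status.md are named above; the rest is an assumption about where project pages live):

```
~/.gyrus/
├── me.md       # working patterns, cross-cutting decisions
├── ideas.md    # ideas and brainstorms
├── status.md   # project statuses, editable by hand
└── ...         # per-project wiki pages (exact layout may differ)
```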
Gyrus syncs via a private GitHub repo you own. Every run pulls the latest, ingests new sessions, and pushes. Zero config once set up.
iCloud, Dropbox, and Google Drive are explicitly not recommended: they evict and lock files, silently breaking ingest.
✓ First machine
```
curl -fsSL https://gyrus.sh/install | bash
# Installer offers GitHub setup
# or run: gyrus init
```
✓ Second machine
```
curl -fsSL https://gyrus.sh/install | bash
gyrus init --clone <your-repo-url>
```
Obsidian
Set your vault path to ~/.gyrus/ — pages are plain markdown
Notion
Optional adapter via --storage=notion
Auto-sync is non-fatal — network hiccups never block local work.
Good fit
Less useful for
Different approach. Mem0 and OpenMemory store facts as vector embeddings in a database. They're designed as memory APIs for AI apps.
Gyrus produces editable markdown wiki pages organized by project. You can read them, edit them, sync them with any tool. The output is structured documents, not database rows.
Those are static files you write and maintain by hand, scoped to a single repo. Gyrus extracts knowledge from your actual sessions across all your projects and tools automatically. They're complementary — AGENTS.md tells the AI how to behave, Gyrus captures what you've decided and built.
Gyrus currently supports Claude Code, Claude Cowork, OpenAI Codex, and Google Antigravity. More tools added on request — each one just needs a session reader (~30 lines of Python). Contributions welcome.
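A session reader's job is small: find a tool's session files and yield one record per session for extraction. A hypothetical reader for a tool that logs sessions as JSON-lines might look like this (the file layout, field names, and record shape are all assumptions for illustration, not Gyrus's actual reader interface):

```python
import json
from pathlib import Path
from typing import Iterator

def read_sessions(root: Path) -> Iterator[dict]:
    """Yield one record per session file under `root` (hypothetical layout)."""
    for path in sorted(root.glob("*.jsonl")):
        messages = []
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            # Keep only conversational turns; skip system/tool noise.
            if event.get("role") in ("user", "assistant"):
                messages.append(event.get("content", ""))
        yield {
            "workspace": path.stem,        # used to group sessions by project
            "text": "\n".join(messages),   # raw transcript for extraction
        }
```

The extraction LLM does the heavy lifting downstream, so a reader only needs to locate files and normalize them into plain text.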
Gyrus runs entirely on your machine by default. Sessions are sent to your chosen LLM API for extraction (the same exposure as using any AI tool), but your knowledge base stays local as plain markdown files. Cross-machine sync goes through a private GitHub repo you own; your data never passes through servers we run.
Yes, and you should. Pages are LLM-maintained drafts — they can be subtly wrong or overgeneralized. Edit them in any text editor. Gyrus preserves your manual edits during the next merge. You can also set project statuses in status.md.
Run gyrus doctor. It checks storage location, ingest freshness, scheduled-job status, GitHub sync, API keys, session sources, backlog, and lockfile in one pass, then tells you the single most likely root cause. Add --fix to auto-patch the safe ones (removes stale locks, installs cron, runs brctl download, initializes git, syncs). Every gyrus run also prints a heartbeat line, so silent failures can't go unnoticed.
gyrus update. Downloads the latest code from GitHub. Your knowledge base, config, and API keys are preserved.
curl -fsSL https://gyrus.sh/uninstall | bash. Removes the cron job, Claude Code skill, and ~/.gyrus/ directory. It will warn you to back up your knowledge base first.
Takes 2 minutes. You'll see what it finds.
curl -fsSL https://gyrus.sh/install | bash