Gyrus

a ridge on the cerebral cortex

For builders using AI across
multiple projects and tools.

Claude Code, Cowork, Codex, Antigravity — none of them remember the others. Gyrus reads all of them and builds one knowledge base. Plain markdown, local-first, editable.

$ curl -fsSL https://gyrus.sh/install | bash

One command. Bring your own API key. macOS, Linux & Windows.

See it work

Run the install. Gyrus scans your sessions, extracts insights, and merges them into project wikis — automatically.

gyrus init
Found: 51 cowork, 33 claude-code, 53 codex, 132 antigravity
  Cost estimate: ~$4.11 | Time estimate: ~14 minutes

[1/269] claude-code  travel-app     5 thoughts
[2/269] cowork       safety-alerts  8 thoughts
[3/269] codex        clinic-notes   4 thoughts
[4/269] antigravity  style-engine   11 thoughts

Merging 115 thoughts into 'safety-alerts'...
   Updated 'safety-alerts' v1
Merging 271 thoughts into 'style-engine'...
   Updated 'style-engine' v1

Done. 269 sessions, 1226 thoughts, 31 project pages.
  Cost: ~$4.88 | LLM calls: 266 extract, 32 merge

What a project page looks like

safety-alerts.md

SafetyAlerts

active · pre-launch · Priority: P1

Overview

Consumer app aggregating safety alerts from government agencies into a personalized feed. iOS and Android sharing the same backend. Privacy-first: all user data on-device, only anonymous push tokens stored remotely.

Key Decisions (from 4 different tools)

  • 03-25  Killed scanner tab before App Store resubmission (cowork)
  • 03-28  Fixed push notification key mismatch across platforms (claude-code)
  • 03-30  Switched to direct Play Billing for donations (antigravity)
  • 04-01  Dynamic sitemap via edge worker for SEO pages (codex)

Sources

Built from 115 thoughts across claude-code, cowork, codex, and antigravity sessions

How it works

1. Scans all tools

Reads Claude Code, Cowork, Codex, and Antigravity sessions from the same machine. Groups by workspace so a Cowork architecture session and a Claude Code build session end up on the same project page.
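The grouping step can be sketched roughly as follows. The function name and the session-record shape here are hypothetical illustrations, not Gyrus's actual internals:

```python
from collections import defaultdict

def group_by_workspace(sessions):
    """Group session records from different tools onto one project.

    Each session is assumed to be a dict with 'tool' and 'workspace'
    keys; real session formats are tool-specific.
    """
    projects = defaultdict(list)
    for session in sessions:
        # Normalize to the workspace directory name, e.g.
        # /Users/me/code/safety-alerts -> safety-alerts
        key = session["workspace"].rstrip("/").rsplit("/", 1)[-1]
        projects[key].append(session)
    return dict(projects)

sessions = [
    {"tool": "cowork", "workspace": "/Users/me/code/safety-alerts"},
    {"tool": "claude-code", "workspace": "/Users/me/code/safety-alerts/"},
    {"tool": "codex", "workspace": "/Users/me/code/clinic-notes"},
]
grouped = group_by_workspace(sessions)
# The cowork and claude-code sessions land under the same key
```

The point is the key choice: grouping on the workspace, not the tool, is what lets an architecture chat and a build session converge on one page.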

2. Extracts

An LLM you choose picks out decisions, status changes, and project context. Not code diffs or terminal output.

3. Merges

New insights merge into existing wiki pages. You can edit pages by hand anytime — your edits are preserved.

Supported tools

Claude Code Claude Cowork OpenAI Codex Google Antigravity

Runs automatically

A cron job checks for new sessions on your schedule. No new sessions = no API calls = zero cost. Review your pages periodically — they're drafts, not gospel.

Scheduled sync

The installer sets up a cron job (or Windows Scheduled Task) at whatever frequency you choose — every 30 minutes, hourly, every 4 hours, or daily. Each run checks for new sessions since the last one. If nothing changed, it exits immediately. No LLM calls, no cost.
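The zero-cost exit amounts to comparing session-file modification times against a stored last-run timestamp. A minimal sketch, with hypothetical file locations and state format:

```python
import json
from pathlib import Path

# Hypothetical locations; the real state file and session dirs may differ.
STATE = Path.home() / ".gyrus" / "last_run.json"
SESSION_DIRS = [Path.home() / ".claude" / "projects"]  # one entry per tool

def has_new_sessions(state_file=STATE, session_dirs=SESSION_DIRS):
    """Return True if any session file changed since the last run.

    When nothing changed, the scheduled job can exit before making
    a single LLM call.
    """
    last_run = 0.0
    if state_file.exists():
        last_run = json.loads(state_file.read_text()).get("timestamp", 0.0)
    return any(
        f.stat().st_mtime > last_run
        for d in session_dirs if d.exists()
        for f in d.rglob("*")
        if f.is_file()
    )
```

Because the check is pure filesystem metadata, an idle machine pays nothing no matter how aggressive the schedule is.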

Tool skills

Gyrus installs a /gyrus slash command into Claude Code and instruction files for Codex. Your AI tools can query the knowledge base mid-session — context flows both ways.

When there are new sessions: ~$0.01–0.04 per run • Frequency is configurable during install • Self-updates with python3 ingest.py --update

The problem it solves

Monday you architect in Cowork. Tuesday you build in Claude Code. Wednesday you debug in Codex. Thursday you refactor in Antigravity. By Friday, none of them know what happened in the others.

What Gyrus captures

  • cowork: "Decided to cut the scanner feature before App Store resubmission"
  • claude: "Renamed package across 155 files — rebrand complete"
  • codex: "OAuth token expiration causing 401s on /login endpoint"
  • antigravity: "Tech stack decided: Next.js on edge hosting + managed Postgres"

What it skips

  • Code diffs, file edits, terminal commands
  • "Let me check that file" / "sounds good"
  • CSS changes, dependency updates, config tweaks

Pages evolve over time

Each run adds new context. Pages get more useful as you work, but they're LLM-maintained — review them like you'd review a junior's draft.

Week 1

"Pulse is a real-time analytics dashboard"

Week 2

"Pulse targets early-stage startups. Switched to columnar DB for real-time queries"

Week 3

"Pulse's moat is sub-second latency + native payment integration"

Week 4

"Pulse needs demo by April 1. Ship churn predictor or cut scope."

Pages work best for: project status, key decisions, timeline, next steps. They're weaker for precise architecture docs or exact dependency versions — review and edit as needed.

What it costs

Gyrus runs on your machine. You pay your LLM provider directly. No accounts, no cloud, no middleman.

  • Thought extraction: ~$0.01/session (GPT-4.1-mini, Haiku, Gemini Flash)
  • Knowledge merging: ~$0.05/page update (Sonnet, GPT-4.1, Gemini Pro)
  • Typical monthly cost: $5-15
  • Cloud accounts needed: zero

Run compare to benchmark models on your own sessions and pick one. Supports Anthropic, OpenAI, and Google models. Swap anytime.

You pick the model

16 models across Anthropic, OpenAI, and Google. Run compare to benchmark them on your own sessions — it tests extraction quality, generates sample wiki pages, and an AI judge grades each model. You choose both extraction and merge models.

Benchmark: 5 sessions, 7 fixtures, scored by Sonnet judge

Model                   Thoughts  Time  Cost/run  Quality
gpt-4.1-mini (default)        31   25s    $0.030     9/10
gpt-5.4-nano                  34   17s    $0.008     7/10
gpt-4.1-nano                  30   13s    $0.012     7/10
haiku                         27   25s    $0.045     8/10
sonnet                        24   49s    $0.180     9/10
gemini-flash                  14   23s    $0.010     8/10
gemini-lite                   10    8s     ~free     5/10

These are our results. Yours will vary — compare runs the same benchmark on your sessions and lets you pick.

We measure quality

LLM-generated docs can be polished but subtly wrong. We built an eval framework to catch that and iteratively improve.

Extraction quality

Scored against hand-curated golden fixtures across 5 metrics:

  • Recall: 0.93
  • Precision: 0.85
  • Noise rejection: 0.88
  • Project attribution: 0.95
  • Composite: 0.90
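If the composite is an unweighted mean of the four per-metric scores (an assumption on our part; Gyrus may weight them differently), the arithmetic is consistent with the published 0.90:

```python
# Assuming the composite is an unweighted mean of the four metric
# scores above. This is an illustration, not Gyrus's actual formula.
scores = {
    "recall": 0.93,
    "precision": 0.85,
    "noise_rejection": 0.88,
    "project_attribution": 0.95,
}
# 0.93 + 0.85 + 0.88 + 0.95 = 3.61; 3.61 / 4 = 0.9025, which
# rounds to the published composite of 0.90.
composite = sum(scores.values()) / len(scores)
```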

How we got here

Iterative prompt tuning against 7 golden test fixtures:

  • Baseline: 0.59
  • + match calibration: 0.77
  • + idea classification: 0.83
  • + few-shot examples: 0.91
  • + selectivity tuning: 0.90 (stable)

Run --eval to test our prompts against your own golden fixtures. Run curate to create them.

Built-in hallucination detection

Entity grounding

Named entities in wiki pages must trace back to input thoughts. Ungrounded terms are flagged.

Date verification

Every date in the output must exist in the input. No fabricated timestamps.

Confidence calibration

Detects when "exploring" becomes "committed to" or "might" becomes "will" without evidence.
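The date-verification check, for instance, reduces to set membership over date strings extracted from input and output. A minimal sketch with hypothetical names and a deliberately simplified date pattern:

```python
import re

# Simplified pattern covering YYYY-MM-DD and MM-DD forms; the real
# check presumably recognizes many more date formats.
DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b|\b\d{2}-\d{2}\b")

def fabricated_dates(page_text, source_thoughts):
    """Return dates that appear in a generated page but in none of
    the input thoughts. Hypothetical helper illustrating the idea."""
    allowed = set(DATE.findall(" ".join(source_thoughts)))
    return [d for d in DATE.findall(page_text) if d not in allowed]

thoughts = ["03-25 Killed scanner tab before App Store resubmission"]
page = "Scanner tab removed on 03-25; relaunch planned for 05-01."
# fabricated_dates(page, thoughts) flags "05-01" as ungrounded
```

Entity grounding works the same way in spirit, just over named entities instead of dates, which makes the extraction step harder than a regex.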

Not just projects

Working patterns and cross-cutting decisions go into ~/.gyrus/me.md. Ideas and brainstorms go into ideas.md. You can review project statuses in status.md and edit them by hand.

Sync across machines

Gyrus stores everything as plain markdown in ~/.gyrus/. Sync however you want:

iCloud / Dropbox / Google Drive

Point ~/.gyrus/ to a synced folder

Git

cd ~/.gyrus && git init

Obsidian

Set your vault path to ~/.gyrus/

Notion

Optional adapter via --storage=notion

Run the installer on each machine. Same knowledge base, everywhere.

Who it's for

Good fit

  • Founders and solo builders juggling multiple projects
  • People using multiple AI coding tools across machines
  • Anyone who makes strategic decisions in AI chats and then forgets them

Less useful for

  • One repo, one tool, no context loss
  • Teams that already maintain disciplined docs
  • Users expecting exact, source-of-truth technical documentation

FAQ

How is this different from Mem0 or OpenMemory?

Different approach. Mem0 and OpenMemory store facts as vector embeddings in a database. They're designed as memory APIs for AI apps.

Gyrus produces editable markdown wiki pages organized by project. You can read them, edit them, sync them with any tool. The output is structured documents, not database rows.

How is this different from an AGENTS.md file?

Those are static files you write and maintain by hand, scoped to a single repo. Gyrus extracts knowledge from your actual sessions across all your projects and tools automatically. They're complementary — AGENTS.md tells the AI how to behave, Gyrus captures what you've decided and built.

Which tools does it support?

Gyrus currently supports Claude Code, Claude Cowork, OpenAI Codex, and Google Antigravity. More tools added on request — each one just needs a session reader (~30 lines of Python). Contributions welcome.
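A reader for a new tool might look roughly like this. The function name, record shape, and JSONL-transcript assumption are illustrative guesses, not the actual Gyrus interface — check the repo for the real contract:

```python
import json
from pathlib import Path

def read_sessions(root):
    """Hypothetical session reader for a new tool.

    A reader boils down to: find the tool's session files and yield
    normalized records. Here we assume JSONL transcripts under
    `root`, one message per line.
    """
    for path in Path(root).rglob("*.jsonl"):
        messages = [
            json.loads(line)
            for line in path.read_text().splitlines()
            if line.strip()
        ]
        yield {
            "tool": "my-new-tool",          # hypothetical tool name
            "workspace": str(path.parent),  # grouping key for project pages
            "messages": messages,
        }
```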

Does my data stay local?

Gyrus runs entirely on your machine by default. Sessions are sent to your chosen LLM API for extraction (same as using any AI tool), but your knowledge base stays local as plain markdown files. Nothing is stored in any cloud unless you opt in — you can choose to sync via iCloud, Dropbox, Notion, or Git if you want cross-machine access.

Can I edit the pages myself?

Yes, and you should. Pages are LLM-maintained drafts — they can be subtly wrong or overgeneralized. Edit them in any text editor. Gyrus preserves your manual edits during the next merge. You can also set project statuses in status.md.

How do I update it?

cd ~/.gyrus && uv run ingest.py --update. Downloads the latest scripts from GitHub. Your knowledge base, config, and API keys are preserved.

How do I uninstall?

curl -fsSL https://gyrus.sh/uninstall | bash. Removes the cron job, Claude Code skill, and ~/.gyrus/ directory. It will warn you to back up your knowledge base first.

Try it on your sessions

Takes 2 minutes. You'll see what it finds.

$ curl -fsSL https://gyrus.sh/install | bash