The Case for a Repo-Centric Backlog CLI
Why I moved my engineering roadmap to a local SQLite database—isolated by project—instead of using Linear, Notion, or scattered Markdown files.
An architect’s roadmap for multiple projects usually ends up in one of three places, all of them wrong: a TODO.md that hasn’t been updated in weeks, a GitHub issue buried without priority, or an AI agent chat that clears its context every morning. Every time you open a repository, the archaeology starts over—and rehydrating that context into a prompt costs tokens and time that no one accounts for.
I was managing more than 20 active repositories across my consulting practice and side experiments. I didn’t want to spin up a private Jira or ClickUp instance, nor did I want another Markdown file pretending I was “managing” things. So, I wrote a Rust CLI called local-backlog: a single binary that serves all repositories while maintaining strict per-project isolation.
The Concrete Pain
I operate in three modes simultaneously. On consulting projects, I perform “skeleton walking”—designing the architecture, scaffolding, ADRs, and pipelines—before handing off execution to the team’s juniors and mid-levels. For stable products, I act as an advisor to the maintenance team. In parallel, I run my solo consulting practice, often supported by AI agents (Claude Code, Codex, Gemini).
The stack of repositories isn’t all mine to develop, but it’s all mine to remember. For every project, I need to reconstruct: where I left off in the ADR review, which decision was pending with the tech lead, what technical debt I spotted in the last pass, or that phase-two plan I promised to draft. This isn’t a team backlog—teams have Jira, Linear, or a shared board. This is the “brain dump” of an orchestrator who needs to jump back into context without re-reading three Slack channels.
The problem isn’t the number of tasks; it’s that no loose format survives an agent restart or a week away from the repo. Every return starts with “catch me up on what’s in progress”—either with myself or with the agent. Then I’m copying TODO.md, pasting issues, and summarizing yesterday’s wins. It’s the editorial equivalent of rebuilding a cache from scratch on every request.
Two Backlogs That Shouldn’t Mix
In established teams, there are usually two backlogs for the same codebase. The product backlog—epics, features, and priorities discussed in refinements—is public and collective. The engineering roadmap—technical debt, pending refactors, architectural risks, and “parking lot” ideas—usually stays in a side channel between tech leads, if it doesn’t just live in the lead’s head.
This boundary isn’t new. Martin Fowler formalized parts of it in the Technical Debt Quadrant: a decision should be recorded long before it becomes a ticket. Will Larson, in Staff Engineer, treats the “technical roadmap” as an artifact separate from the product roadmap—work that Staff and Tech Leads carry even when no board asks for it.
This holds true even without formal leadership. Every senior engineer operates in two layers: what’s on the sprint board and the technical context they carry personally. Sometimes you can promote an item from your layer directly to the project backlog; other times, you bring it to a PM during a ceremony. In both cases, the material needs to be recorded somewhere that supports filtering by project, priority, and horizon.
This is where cadence comes in: daily, weekly, monthly, quarterly. Each horizon requires a different lens: today is status=doing; monthly planning is a query for priority and tag; a quarterly retro reads a deterministic timeline of events. Without a structured filter, your roadmap collapses into a narrative document that no one actually re-reads.
local-backlog is this second layer materialized on disk. It doesn’t compete with Jira—it feeds it.
The Three Costs
Before building the CLI, I weighed the alternatives. None are inherently bad, but each failed on a specific axis for my workflow.
SaaS tools (Linear, Notion):
- Cost: monthly fees plus the context-switching overhead of jumping into the browser.
- Great UI, but poor as a programmable source for agents.
- External silo: agents need API keys, rate limits, and auth.
- Backup depends entirely on the vendor.

Markdown files (TODO.md):
- Cost: zero dollars in tools, high discipline.
- No schema: priority becomes a [P1] prefix, tags become emojis, and ordering is just typing order.
- Filtering for "open bugs only" requires grep and human parsing.
- Exporting to an LLM means pasting the entire file.
The third option—a local SQLite database with a custom CLI—has the upfront cost of building the tool (and, for me, learning Rust). You pay that cost once. After that, ~/.local-backlog/backlog.db is just data: versionable via dotfiles and queryable via SQL whenever the CLI doesn’t support a specific report.
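To make the "just data" point concrete, here is a minimal sketch of an ad-hoc report against the database, using Python's stdlib sqlite3 (the tool itself is Rust; the tasks columns follow the core schema from ADR-0002, but treat the exact names as assumptions):

```python
import sqlite3

def top_open_tasks(db_path, limit=5):
    """Return the highest-priority unfinished tasks, ordered deterministically."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT id, title, priority FROM tasks "
            "WHERE status IN ('todo', 'doing') "
            "ORDER BY priority DESC, id ASC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()

# Usage (path from the article): top_open_tasks("~/.local-backlog/backlog.db")
```

Nothing here depends on the CLI being installed; any SQL client gets you the same report.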
The Implementation
local-backlog is a single binary called backlog. All state lives in ~/.local-backlog/ (overridable via LOCAL_BACKLOG_HOME). Nothing is stored in the repository itself; no .local-backlog.db files polluting your git status. The link between a folder and a project lives in a global registry; running backlog init in a new folder registers that path as a tenant.
The Quickstart is straightforward:
cd ~/code/my-project
backlog init --yes
backlog add "Refactor auth middleware" \
--type feature --tag security --priority 50
backlog list
backlog list --format json
backlog show 1
backlog done 1
backlog export --format markdown
Every command resolves the project tenant based on your current working directory (CWD). There is no --all-projects flag for data commands; entering the directory defines the scope.
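The resolution logic is simple enough to sketch. This is an illustrative Python version of the idea, not the Rust implementation; the registry shape and function names are assumptions:

```python
from pathlib import Path

def resolve_project(cwd, registry):
    """Walk up from cwd until a registered project root is found.

    `registry` maps absolute root paths to project ids, mirroring the
    global registry that `backlog init` populates.
    """
    current = Path(cwd).resolve()
    for candidate in [current, *current.parents]:
        project_id = registry.get(str(candidate))
        if project_id is not None:
            return project_id
    raise LookupError(f"no project registered for {cwd}; run 'backlog init'")
```

Walking up the parents means the tenant resolves correctly from any subdirectory of the repo, not just its root.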
Decisions That Matter
Strict Isolation, Enforced by Triggers
ADR-0001 details this choice: every data query filters by a project_id inferred from the CWD. SQL triggers on task_tags, task_links, and parent_id block cross-project inserts or updates. An #auth tag in two different repos won’t collide—tags.(project_id, name) is unique per tenant.
I could have handled this in the application logic, but I didn’t. A forgotten WHERE clause silently breaches isolation; a trigger fails loudly. It’s cheap defense-in-depth for a tool I maintain myself, where a leak between repos is exactly the kind of mistake I’d make during a rushed refactor.
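The trigger pattern is easy to reproduce. Below is a minimal demonstration using Python's stdlib sqlite3 (the real schema lives in the Rust migrations; table and trigger names here are assumptions consistent with the article):

```python
import sqlite3

def open_demo_db():
    """In-memory schema demonstrating trigger-enforced tenant isolation."""
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE tasks (id INTEGER PRIMARY KEY, project_id INTEGER, title TEXT);
        CREATE TABLE tags  (id INTEGER PRIMARY KEY, project_id INTEGER, name TEXT,
                            UNIQUE (project_id, name));  -- per-tenant tag names
        CREATE TABLE task_tags (task_id INTEGER REFERENCES tasks(id),
                                tag_id  INTEGER REFERENCES tags(id));

        -- Fail loudly when a tag from one project is attached to a task
        -- from another, no matter what the application layer forgot.
        CREATE TRIGGER task_tags_same_project
        BEFORE INSERT ON task_tags
        WHEN (SELECT project_id FROM tasks WHERE id = NEW.task_id)
          != (SELECT project_id FROM tags  WHERE id = NEW.tag_id)
        BEGIN
            SELECT RAISE(ABORT, 'cross-project tag link');
        END;
    """)
    return con
```

A cross-project insert surfaces as a constraint error at the database boundary, which is exactly the "fails loudly" behavior the application-level WHERE clause can't guarantee.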
Atomic Tasks, Satellite Tables for Everything Else
ADR-0002 defines tasks with a minimal core (id, title, status, priority, type, parent_id, timestamps). Everything else—tags, EAV (Entity-Attribute-Value) attributes, typed links (blocks, relates, duplicates), and append-only events—lives in satellite tables.
The trap: EAV is seductive. You want to throw everything into task_attributes(key, value) and never run another migration. My rule is explicit: an EAV key only becomes a column in the tasks table when it appears in ≥80% of active tasks or becomes a frequent filter. “Promotion” is a conscious decision, not an automatic one.
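The ≥80% rule can be audited with a single coverage query. A sketch with stdlib sqlite3, assuming a task_attributes(task_id, key, value) satellite table as described above (column names are an assumption):

```python
import sqlite3

PROMOTION_THRESHOLD = 0.8  # promote an EAV key once >= 80% of active tasks carry it

def promotion_candidates(con):
    """EAV keys whose coverage over active (non-done) tasks crosses the threshold."""
    rows = con.execute("""
        SELECT a.key,
               COUNT(DISTINCT a.task_id) * 1.0
                 / (SELECT COUNT(*) FROM tasks WHERE status != 'done') AS coverage
        FROM task_attributes a
        JOIN tasks t ON t.id = a.task_id AND t.status != 'done'
        GROUP BY a.key
    """).fetchall()
    return [key for key, coverage in rows if coverage >= PROMOTION_THRESHOLD]
```

Running this before a migration keeps promotion a measured decision rather than a hunch.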
The Output Contract
ADR-0004 saved me the most headaches: Data goes to stdout, while messages (logs, prompts, errors) go to stderr. Every read command has supported --format=table|json since day one. JSON output uses a { "schema_version": N, "data": ... } envelope. There are no raw println! calls in subcommands; everything flows through an output helper.
The trade-off: Annoying discipline during the first week, but reliable pipes forever. backlog list --format json | jq '.data[] | select(.priority > 40)' won’t break just because I turned on --verbose.
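The contract boils down to two tiny functions that every subcommand must go through. An illustrative Python sketch of the idea (the real helper is Rust; names are assumptions):

```python
import json
import sys

SCHEMA_VERSION = 1

def emit_data(payload, stream=sys.stdout):
    """Machine-readable output: versioned JSON envelope, stdout only."""
    envelope = {"schema_version": SCHEMA_VERSION, "data": payload}
    stream.write(json.dumps(envelope, sort_keys=True) + "\n")

def emit_message(text, stream=sys.stderr):
    """Human-readable logs, prompts, and errors: stderr only."""
    stream.write(text + "\n")
```

Because --verbose chatter can only ever reach emit_message, a jq pipeline reading stdout never sees it.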
AI as a Consumer, Not an Owner
The real “killer feature” of this project is backlog export --format json. It provides structured output, filtering by status/tag/type, and deterministic ordering. Two runs against an unchanged database produce identical bytes—allowing for snapshot testing and clean Git diffs.
{
"schema_version": 1,
"project": { "id": 1, "name": "proj", "root_path": "...", "archived_at": null },
"tasks": [
{
"id": 42,
"title": "refactor auth middleware",
"status": "doing",
"priority": 50,
"type": "feature",
"tags": ["security", "debt"],
"attributes": [{ "key": "jira", "value": "ABC-123" }],
"links_out": [{ "from_id": 42, "to_id": 17, "kind": "blocks" }],
"events": []
}
]
}
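Byte-identical output falls out of two choices: a stable SQL ordering (ORDER BY id) and a stable serialization. A minimal Python sketch of the serialization half (illustrative; the exporter itself is Rust):

```python
import json

def render_export(project, tasks):
    """Byte-stable export: tasks sorted by id, keys sorted, fixed separators.

    The same logical state always serializes to the same bytes, which is
    what makes snapshot tests and Git diffs of exports trustworthy.
    """
    doc = {
        "schema_version": 1,
        "project": project,
        "tasks": sorted(tasks, key=lambda t: t["id"]),
    }
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")
```

Any nondeterminism (hash-map iteration order, locale-dependent formatting) would show up immediately as a failing snapshot test.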
In practice, the workflow is: An agent starts the session by running backlog export --format markdown --status todo,doing and injecting it into the initial context. At the end of the session, if a task is finished, the agent runs backlog done or backlog edit. The AI consumes and updates; it doesn’t have to guess or reinterpret. SQLite remains the single source of truth.
Previously, every session began with a few minutes of “explain what’s in progress” and a paragraph pasted from three different sources. Now, it starts with a command and deterministic output. It’s less prompt rehydration and less variance between my memory and reality.
What I Didn’t Do
- Cross-machine sync: My plan is to just sync ~/.local-backlog/ via dotfiles and Git. I don't need real-time conflict resolution, CRDTs, or a server. On two active machines, manual resolution is fine; on four, it might become a problem.
- Web UI: Doesn't exist. Might never exist. The terminal is my environment; backlog show 42 and backlog events 42 give me everything I need.
- Multi-user support: This tool assumes a single human. Teams have Jira or Linear; local-backlog is a personal brain dump for an orchestrator, not a shared board. Adding an author_id would be trivial, but there's no use case for it.
Without multiple fronts running in parallel, a disciplined TODO.md is usually enough. But a tool like this pays for its existence the moment the friction of recovering context becomes a daily occurrence.
Code
Check out the code at github.com/OliveiraCleidson/local-backlog.
Canonical ADRs are in docs/adr/pt-BR/, with translations in en and es-AR in the same commit.
Discussion
This blog has no comments. To discuss this post: