Every major AI company has a chat interface. Some have plugins, canvas views, artifact rendering, voice input. The interfaces are getting richer.
And yet, the AI workflows that have actually improved my productivity the most are CLI-based. I want to explain why.
The Composition Problem
Chat interfaces are islands. What you do in Claude.ai stays in Claude.ai. You can't pipe output to another tool, log results, or integrate with your existing scripts without copy-pasting.
The terminal is not an island. It's the integration point for everything else on your machine.
# Chat interface version:
# 1. Open browser
# 2. Navigate to claude.ai
# 3. Paste your error message
# 4. Read response
# 5. Copy what you need
# 6. Switch back to terminal
# Terminal version:
pbpaste | llm "Explain this error and suggest a fix" | pbcopy
# Or:
cat error.log | fabric --pattern explain | tee explanation.txt
The terminal version takes five seconds. The browser version takes five minutes of interrupted, context-switching work. Multiplied across a day of development, this matters.
Scriptability
Anything in the terminal is scriptable. That means AI operations become repeatable automations:
# Summarize all PRs merged this week
# (date -d is GNU date; on macOS use: date -v-7d +%Y-%m-%d)
gh pr list --state merged --json title,body \
--search "merged:>$(date -d '7 days ago' +%Y-%m-%d)" | \
llm "Summarize these PRs for the weekly engineering update"
# Review every modified file before committing
git diff --name-only HEAD | \
xargs -I{} sh -c 'echo "=== {} ===" && cat {}' | \
llm "Review these changes for issues before commit"
You can't script a chat interface. You can script anything in the terminal.
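Once a pipeline like the PR summary above earns its keep, it can be promoted to a one-word command. A minimal sketch, assuming simonw's llm CLI and GNU date; the function name and prompt are illustrative, not from any tool:

```shell
# Sketch: wrap the recurring pipeline in a shell function (e.g. in
# ~/.bashrc or ~/.zshrc) so the whole automation becomes one word.
# The name "weekly_summary" is hypothetical.
weekly_summary() {
  gh pr list --state merged --json title,body \
    --search "merged:>$(date -d '7 days ago' +%Y-%m-%d)" |
    llm "Summarize these PRs for the weekly engineering update"
}
```

From here it's one small step to a cron entry or a CI job that runs the same function on a schedule.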
The History and Audit Trail
Your shell history keeps a record of the commands you ran. Tools like simonw/llm go further: they log every prompt and response to a local SQLite database. Every AI interaction becomes auditable, searchable, and reproducible.
llm logs list                      # See your recent queries
llm logs list -q "error handling"  # Full-text search of previous answers
Chat interfaces give you a conversation history that's tied to a web interface and might disappear with your account. Local logs belong to you.
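Because the log is just a SQLite file on disk, you can treat it like any other data you own. A sketch, assuming llm's `logs path` subcommand (which prints the database location); the backup filename is illustrative:

```shell
# Sketch: back up the local llm log database. "llm logs path" prints
# where the SQLite file lives; copy it like any other file.
backup_llm_logs() {
  db="$(llm logs path)"
  cp "$db" "$HOME/llm-logs-backup-$(date +%Y%m%d).db"
}
```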
Composability with Real Tools
The real power comes from chaining AI with non-AI tools:
# Get AI help understanding a file's change history
git log --follow -p src/payment/processor.py | \
head -200 | \
llm "Summarize the history of changes to this file and why they were made"
# Generate test cases from an API spec
cat openapi.yaml | \
llm "Generate pytest test cases for these endpoints" > tests/test_api.py
# Convert a data file
cat messy_data.csv | \
llm "Convert this CSV to JSON, cleaning up the column names" > clean_data.json
Each of these chains is something you can't replicate in a chat interface without a lot of copy-pasting. In the terminal, they're one-liners.
The Latency of Attention
Switching to a browser window and back has a cost that's easy to underestimate. Context switching is expensive. Every time you break flow to open a chat interface, you pay that cost.
CLI tools that live in your terminal eliminate the switch. The AI is where you already are.
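One concrete way to keep the AI at the prompt is a tiny wrapper function. A sketch, assuming simonw's llm CLI; the name `ask` is mine:

```shell
# Sketch: a one-line helper so asking a question never leaves the shell.
# llm treats piped stdin as context for the prompt, so both forms work:
#   ask "what does EXDEV mean?"
#   cat error.log | ask "what went wrong here?"
ask() { llm "$*"; }
```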
This is also why editor integrations (GitHub Copilot, Cursor) are so valuable — same principle. The AI comes to your context rather than you going to the AI's context.
The Chat Interface Has Its Place
This isn't an argument that chat interfaces are bad. For exploratory conversations, complex multi-turn problem solving, or anything where you're working through an idea interactively — chat is the right interface.
The point is narrower: for repetitive tasks, automations, and anything that benefits from integration with other tools, the terminal is better. Most developers don't use CLI-based AI tools because they require more setup, and the chat interface is right there. But the setup investment pays off quickly.
Start small: install llm (pip install llm), configure your API key, and spend a week running your questions through the CLI instead of the browser. You'll find your own use cases — and most of them will stay CLI because the composability is that useful.