Getting Started with Hermes Agent
Install Hermes Agent, configure your first LLM provider, and run your first conversation — all in under 10 minutes.
TLDR: Hermes Agent is an open-source AI agent framework by Nous Research. Install it with `curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash`, set up a provider with `hermes setup`, and start chatting with `hermes`. This guide walks through every step.
What is Hermes Agent?
Hermes Agent is an open-source AI agent framework by Nous Research. Think of it as a self-improving, provider-agnostic AI assistant that lives in your terminal, on your messaging platforms, and in your IDE — capable of browsing the web, running commands, writing files, and remembering context across sessions.
It belongs to the same category as Claude Code (Anthropic) and Codex (OpenAI) — autonomous agents that use tool calling to interact with your system. What makes Hermes different is its skills system (self-improving procedures), persistent memory across sessions, multi-platform gateway (Telegram, Discord, Slack, etc.), and provider-agnostic design (20+ LLM providers).
Key Takeaways
- One-line install via curl — no complicated setup
- Works with 20+ LLM providers (OpenRouter, Anthropic, OpenAI, DeepSeek, local models)
- Fully interactive CLI with tool-use capabilities (terminal, file, web, browser)
- Sessions are saved and resumable
- Self-diagnostic: `hermes doctor` catches most setup issues
Installation
The fastest way to install Hermes is the one-liner:
```shell
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```
This downloads the latest release, sets up a Python virtual environment, and puts the `hermes` binary on your PATH.
After installation, verify it works:
```shell
hermes --version
```
If you see a version number, you’re good. If not, make sure `~/.local/bin` is in your PATH:
```shell
export PATH="$HOME/.local/bin:$PATH"
```
Add that to your `~/.bashrc` or `~/.zshrc` to make it permanent.
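If you want to script that step, a small guard keeps the line from being duplicated when run twice (a sketch assuming bash; swap in `~/.zshrc` for zsh):

```shell
# Persist the PATH change idempotently (bash assumed; use ~/.zshrc for zsh)
RC_FILE="$HOME/.bashrc"
LINE='export PATH="$HOME/.local/bin:$PATH"'

# grep -qxF matches the exact whole line, so re-running never duplicates it
grep -qxF "$LINE" "$RC_FILE" 2>/dev/null || echo "$LINE" >> "$RC_FILE"
```

Open a new terminal (or `source` the file) for the change to take effect.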
Alternative Installation Methods
Via pip:
```shell
pip install hermes-agent
```
From source:
```shell
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
pip install -e .
```
Setting Up a Provider
Hermes is provider-agnostic — it works with 20+ LLM providers. You need at least one API key to get started.
Recommended: Setup Wizard
```shell
hermes setup
```
This walks you through:
- Model & Provider — pick your LLM provider and model (OpenRouter, Anthropic, OpenAI, DeepSeek, etc.)
- Terminal — confirm shell backend (local, SSH, or Docker)
- Tools — confirm default toolset selection
- Agent — default settings work fine to start
Manual: Just Set an API Key
```shell
# OpenRouter (recommended — one key, many models)
export OPENROUTER_API_KEY="sk-or-v1-..."

# Or Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Or OpenAI
export OPENAI_API_KEY="sk-..."
```
Save it permanently:
```shell
echo 'OPENROUTER_API_KEY="sk-or-v1-..."' >> ~/.hermes/.env
```
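If `~/.hermes` doesn't exist yet, the redirect above will fail; a slightly fuller sketch creates the directory first and tightens permissions (the key value is a placeholder, and the `chmod` is a general precaution, not a Hermes requirement):

```shell
# Create the config directory, store the key, and lock down permissions.
# The key below is a placeholder; paste your real key in its place.
mkdir -p "$HOME/.hermes"
echo 'OPENROUTER_API_KEY="sk-or-v1-..."' >> "$HOME/.hermes/.env"
chmod 600 "$HOME/.hermes/.env"   # API keys are secrets: owner-only access
```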
Switch Models Anytime
```shell
hermes model
```
This opens an interactive picker. You can also set it directly in the CLI configuration:
```shell
hermes config set model.default "anthropic/claude-sonnet-4"
hermes config set model.provider "openrouter"
```
Your First Chat
Run interactive mode:
```shell
hermes
```
Try asking something practical:
```
> How large is the current directory, in human-readable format?
```
Hermes will run a shell command (`du -sh .`), read the output, and return a clean answer. This demonstrates the tool-use workflow: the agent reasons about your request, calls tools (terminal, file, web, etc.), and presents the results.
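You can try the underlying command yourself to see what the agent reads back (`du -sh` is standard on Linux and macOS):

```shell
# -s: summarize the total; -h: human-readable units (K, M, G)
du -sh .

# The agent parses stdout; the first whitespace-separated field is the size
size=$(du -sh . | awk '{print $1}')
echo "Current directory uses $size"
```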
Quick Query Mode
For one-off questions — great for scripts and automation:
```shell
hermes chat -q "What time is it in Tokyo?"
```
This runs once and exits.
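That makes quick-query mode easy to embed in shell scripts. A sketch, assuming `hermes` is installed and a provider key is configured (the prompt and output path are arbitrary examples):

```shell
#!/usr/bin/env bash
# Wrap quick-query mode in a script; guard it so the script degrades
# gracefully on machines where hermes is not installed yet.
set -u

if command -v hermes >/dev/null 2>&1; then
  # Runs once, writes the answer to the file, and exits
  hermes chat -q "What time is it in Tokyo?" > /tmp/hermes_answer.txt
else
  echo "hermes not found on PATH; skipping" >&2
fi
```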
Health Check
If something isn’t working:
```shell
hermes doctor --fix
```
This checks dependencies, config, and API connectivity, and auto-resolves common issues.
FAQ
Q: Do I need a powerful computer to run Hermes? No — Hermes runs inference on remote providers. Your machine just needs network access and a terminal. Local models (Ollama, llama.cpp) are optional.
Q: Is Hermes free? Hermes itself is free and open source (MIT license). You pay for the LLM API calls through your chosen provider. OpenRouter offers free tier models; local models cost only electricity.
Q: Can I use Hermes without API keys? Yes — you can run local models via Ollama or llama.cpp. See the CLI Mastery guide for model configuration.
Next Steps
- CLI Mastery — learn commands, flags, session management, and slash commands
- Skills: Teaching Your Agent to Learn — create reusable procedures that make your agent smarter over time
- Multi-Platform Gateway — run Hermes on Telegram, Discord, Slack, and more
- Advanced Automation — cron jobs, MCP servers, and subagent delegation