Quickstart
Get Moltis running in under 5 minutes.
1. Install
```sh
curl -fsSL https://www.moltis.org/install.sh | sh
```

Or via Homebrew:

```sh
brew install moltis-org/tap/moltis
```
2. Start
```sh
moltis
```

You'll see output like:

```
Moltis gateway starting...
Open http://localhost:13131 in your browser
```
3. Configure a Provider
You need an LLM provider configured to chat. The fastest options:
Option A: API Key (Anthropic, OpenAI, Gemini, etc.)
- Set an API key as an environment variable and restart Moltis:

  ```sh
  export ANTHROPIC_API_KEY="sk-ant-..."  # Anthropic
  export OPENAI_API_KEY="sk-..."         # OpenAI
  export GEMINI_API_KEY="..."            # Google Gemini
  ```

- Models appear automatically in the model picker.
Or configure via the web UI: Settings → Providers → enter your API key.
Option B: OAuth (Codex / Copilot)
- In Moltis, go to Settings → Providers
- Click OpenAI Codex or GitHub Copilot → Connect
- Complete the OAuth flow
Option C: Local LLM (Offline)
- In Moltis, go to Settings → Providers
- Click Local LLM
- Choose a model and save
See Providers for the full list of supported providers.
4. Chat!
Go to the Chat tab and start a conversation:
You: Write a Python function to check if a number is prime
Agent: Here's a Python function to check if a number is prime:
```python
def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True
```
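The generated function can be checked quickly outside the chat. This snippet repeats the definition so it runs standalone:

```python
def is_prime(n):
    """Return True if n is prime, using trial division up to sqrt(n)."""
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

# Spot check: list the primes below 20.
print([x for x in range(20) if is_prime(x)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```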
What's Next?
Enable Tool Use
Moltis can execute code, browse the web, and more. Tools are enabled by default with sandbox protection.
Try:
You: Create a hello.py file that prints "Hello, World!" and run it
Connect Telegram
Chat with your agent from anywhere:
- Create a bot via @BotFather
- Copy the bot token
- In Moltis: Settings → Telegram → Enter token → Save
- Message your bot!
Connect Discord
- Create a bot in the Discord Developer Portal
- Enable Message Content Intent and copy the bot token
- In Moltis: Settings → Channels → Connect Discord → Enter token → Connect
- Invite the bot to your server and @mention it!
Add MCP Servers
Extend capabilities with MCP servers:
```toml
# In moltis.toml
[mcp]
[mcp.servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "ghp_..." }
```
Set Up Memory
Enable long-term memory for context across sessions:
```toml
# In moltis.toml
[memory]
provider = "openai"
model = "text-embedding-3-small"
```
Add knowledge by placing Markdown files in ~/.moltis/memory/.
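For example, a knowledge file could be added like this (the filename and contents are hypothetical; Moltis only needs Markdown files in that directory):

```shell
# Create the memory directory and drop in a Markdown knowledge file.
mkdir -p ~/.moltis/memory
cat > ~/.moltis/memory/project-notes.md <<'EOF'
# Project Notes
The API server lives in backend/ and talks to PostgreSQL.
EOF
```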
Useful Commands
| Command | Description |
|---|---|
| /new | Start a new session |
| /model &lt;name&gt; | Switch models |
| /clear | Clear chat history |
| /help | Show available commands |
File Locations
| Path | Contents |
|---|---|
| ~/.config/moltis/moltis.toml | Configuration |
| ~/.config/moltis/provider_keys.json | API keys |
| ~/.moltis/ | Data (sessions, memory, logs) |
Getting Help
- Documentation: docs.moltis.org
- GitHub Issues: github.com/moltis-org/moltis/issues
- Discussions: github.com/moltis-org/moltis/discussions