# LLM Providers
Moltis supports 30+ LLM providers through a trait-based architecture. Configure providers through the web UI or directly in configuration files.
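Each provider is implemented behind a common trait. As a rough illustration of the shape such an abstraction takes, here is a minimal Rust sketch; the trait name `LlmProvider` and its methods are assumptions for illustration, not the actual Moltis API:

```rust
// Hypothetical sketch of a provider trait; names and signatures are
// illustrative, not the real Moltis interface.
use std::error::Error;

pub trait LlmProvider {
    /// Stable identifier, e.g. "anthropic" or "openai".
    fn id(&self) -> &str;

    /// Model IDs this provider can serve.
    fn models(&self) -> Vec<String>;

    /// Whether the provider supports tool calling (see the tier tables below).
    fn supports_tools(&self) -> bool;

    /// Send a prompt to a model and return the completion text.
    fn complete(&self, model: &str, prompt: &str) -> Result<String, Box<dyn Error>>;
}
```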
## Supported Providers
### Tier 1 (Full Support)
| Provider | Models | Tool Calling | Streaming |
|---|---|---|---|
| Anthropic | Claude 4, Claude 3.5, Claude 3 | ✅ | ✅ |
| OpenAI | GPT-4o, GPT-4, o1, o3 | ✅ | ✅ |
| Google | Gemini 2.0, Gemini 1.5 | ✅ | ✅ |
| GitHub Copilot | GPT-4o, Claude | ✅ | ✅ |
### Tier 2 (Good Support)
| Provider | Models | Tool Calling | Streaming |
|---|---|---|---|
| Mistral | Mistral Large, Codestral | ✅ | ✅ |
| Groq | Llama 3, Mixtral | ✅ | ✅ |
| Together | Various open models | ✅ | ✅ |
| Fireworks | Various open models | ✅ | ✅ |
| DeepSeek | DeepSeek V3, Coder | ✅ | ✅ |
### Tier 3 (Basic Support)
| Provider | Notes |
|---|---|
| OpenRouter | Aggregator for 100+ models |
| Ollama | Local models |
| Venice | Privacy-focused |
| Cerebras | Fast inference |
| SambaNova | Enterprise |
| Cohere | Command models |
| AI21 | Jamba models |
## Configuration

### Via Web UI (Recommended)
- Open Moltis in your browser
- Go to Settings → Providers
- Click on a provider card
- Enter your API key
- Select your preferred model
### Via Configuration Files

Provider credentials are stored in `~/.config/moltis/provider_keys.json`:
```json
{
  "anthropic": {
    "apiKey": "sk-ant-...",
    "model": "claude-sonnet-4-20250514"
  },
  "openai": {
    "apiKey": "sk-...",
    "model": "gpt-4o"
  }
}
```
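This file holds API keys in plain text, so it is worth restricting read access to your user. This is general hygiene, not a Moltis requirement:

```bash
chmod 600 ~/.config/moltis/provider_keys.json
```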
Enable providers in `moltis.toml`:

```toml
[providers]
default = "anthropic"

[providers.anthropic]
enabled = true
models = [
  "claude-sonnet-4-20250514",
  "claude-opus-4-20250514",
]

[providers.openai]
enabled = true
```
## Provider-Specific Setup

### Anthropic
- Get an API key from console.anthropic.com
- Enter it in Settings → Providers → Anthropic
### OpenAI
- Get an API key from platform.openai.com
- Enter it in Settings → Providers → OpenAI
### GitHub Copilot
GitHub Copilot uses OAuth authentication:
- Click Connect in Settings → Providers → GitHub Copilot
- Complete the GitHub OAuth flow
- Authorize Moltis to access Copilot
### Google (Gemini)
- Get an API key from aistudio.google.com
- Enter it in Settings → Providers → Google
### Ollama (Local Models)
Run models locally with Ollama:
- Install Ollama:

  ```bash
  curl -fsSL https://ollama.ai/install.sh | sh
  ```

- Pull a model:

  ```bash
  ollama pull llama3.2
  ```

- Configure in Moltis:

  ```json
  {
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "model": "llama3.2"
    }
  }
  ```
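Before pointing Moltis at Ollama, you can confirm the server is running and the model was pulled by hitting Ollama's local API, which lists installed models:

```bash
curl http://localhost:11434/api/tags
```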
### OpenRouter
Access 100+ models through one API:
- Get an API key from openrouter.ai
- Enter it in Settings → Providers → OpenRouter
- Specify the model ID you want to use
```json
{
  "openrouter": {
    "apiKey": "sk-or-...",
    "model": "anthropic/claude-3.5-sonnet"
  }
}
```
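Model IDs follow the `vendor/model` format shown above. OpenRouter exposes its current catalog at a public endpoint, which is handy for finding exact IDs:

```bash
curl https://openrouter.ai/api/v1/models
```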
## Custom Base URLs

For providers with custom endpoints (enterprise deployments, proxies):
```json
{
  "openai": {
    "apiKey": "sk-...",
    "baseUrl": "https://your-proxy.example.com/v1",
    "model": "gpt-4o"
  }
}
```
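A quick way to verify the endpoint and key work is to list models against the custom base URL. This assumes the proxy is OpenAI-compatible, as most are (the URL and key below are the placeholders from the example above):

```bash
curl https://your-proxy.example.com/v1/models \
  -H "Authorization: Bearer sk-..."
```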
## Switching Providers

### Per-Session
In the chat interface, use the model selector dropdown to switch providers/models for the current session.
### Per-Message

Use the `/model` command to switch models mid-conversation:

```
/model claude-opus-4-20250514
```
### Default Provider

Set the default in `moltis.toml`:

```toml
[providers]
default = "anthropic"

[agent]
model = "claude-sonnet-4-20250514"
```
## Model Capabilities
Different models have different strengths:
| Use Case | Recommended Model |
|---|---|
| General coding | Claude Sonnet 4, GPT-4o |
| Complex reasoning | Claude Opus 4, o1 |
| Fast responses | Claude Haiku, GPT-4o-mini |
| Long context | Claude (200k), Gemini (1M+) |
| Local/private | Llama 3 via Ollama |
## Troubleshooting

### “Model not available”
The model may not be enabled for your account or region. Check:
- Your API key has access to the model
- The model ID is spelled correctly
- Your account has sufficient credits
### “Rate limited”
You’ve exceeded the provider’s rate limits. Solutions:
- Wait and retry
- Use a different provider
- Upgrade your API plan
### “Invalid API key”
- Verify the key is correct (no extra spaces)
- Check the key hasn’t expired
- Ensure the key has the required permissions
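To rule out a Moltis-side issue, test the key directly against the provider. For OpenAI, for example, listing models exercises authentication with nothing else in the way:

```bash
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-..."
```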