goLLM supports multiple LLM providers for code generation and analysis. This document explains how to configure each of them.
The main LLM configuration options are:

| Option | Default |
|--------|---------|
| `enabled` | `false` |
| `default_provider` | `"ollama"` (one of `"openai"`, `"ollama"`, `"anthropic"`) |
| `model` | `"codellama:7b"` (for Ollama) |
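For example, to turn on LLM support while keeping the defaults above, a minimal configuration could look like the sketch below (assuming omitted options fall back to their defaults):

```json
{
  "llm": {
    "enabled": true
  }
}
```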
"llm": {
"enabled": true,
"default_provider": "openai",
"providers": {
"openai": {
"api_key": "your-openai-api-key",
"model": "gpt-4-turbo",
"temperature": 0.1,
"max_tokens": 4000,
"timeout": 120
}
}
}
}
Example configuration for Ollama:

```json
{
  "llm": {
    "enabled": true,
    "default_provider": "ollama",
    "providers": {
      "ollama": {
        "base_url": "http://localhost:11434",
        "model": "codellama:7b",
        "timeout": 180,
        "max_tokens": 4000,
        "temperature": 0.1,
        "api_type": "chat"
      }
    }
  }
}
```
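The Ollama example above assumes that `codellama:7b` is already available locally. If it is not, it can usually be downloaded first with the standard Ollama CLI:

```bash
# Download the model referenced in the config above
ollama pull codellama:7b
```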
Example configuration for Anthropic:

```json
{
  "llm": {
    "enabled": true,
    "default_provider": "anthropic",
    "providers": {
      "anthropic": {
        "api_key": "your-anthropic-api-key",
        "model": "claude-3-sonnet",
        "temperature": 0.1,
        "max_tokens": 4000,
        "timeout": 120
      }
    }
  }
}
```
You can also configure providers using environment variables:
```bash
# OpenAI
export OPENAI_API_KEY=your-api-key

# Ollama
export OLLAMA_BASE_URL=http://localhost:11434

# Anthropic
export ANTHROPIC_API_KEY=your-api-key
```
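For example, assuming goLLM picks these variables up at startup, you can point it at an Ollama instance on another host and then confirm that the provider is reachable:

```bash
# Hypothetical remote Ollama host
export OLLAMA_BASE_URL=http://gpu-server:11434

# Check availability of all configured providers
gollm health
```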
If multiple providers are configured, goLLM will try them in a fixed order of preference.
You can verify your LLM provider configuration with:
```bash
gollm health
```
This will check the availability of all configured providers and show their status.
LLM response caching is configured under `llm.caching`:

```json
{
  "llm": {
    "caching": {
      "enabled": true,
      "ttl": 3600,
      "directory": ".gollm/cache"
    }
  }
}
```
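If cached responses need to be discarded before the TTL expires, the cache directory from the config above can simply be deleted (assuming goLLM recreates it on the next run):

```bash
# Remove all cached LLM responses (path taken from the config above)
rm -rf .gollm/cache
```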
Request rate limiting is configured under `llm.rate_limiting`:

```json
{
  "llm": {
    "rate_limiting": {
      "enabled": true,
      "requests_per_minute": 60
    }
  }
}
```
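The snippets above can be combined into a single configuration. The sketch below merges a provider, caching, and rate limiting, assuming `caching` and `rate_limiting` nest alongside `providers` under the `llm` key as the separate examples suggest:

```json
{
  "llm": {
    "enabled": true,
    "default_provider": "ollama",
    "providers": {
      "ollama": {
        "base_url": "http://localhost:11434",
        "model": "codellama:7b",
        "timeout": 180,
        "max_tokens": 4000,
        "temperature": 0.1,
        "api_type": "chat"
      }
    },
    "caching": {
      "enabled": true,
      "ttl": 3600,
      "directory": ".gollm/cache"
    },
    "rate_limiting": {
      "enabled": true,
      "requests_per_minute": 60
    }
  }
}
```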