This guide explains how to use the local LLM (Large Language Model) integration in mdiss to generate tickets with models running locally via Ollama.
```bash
# Install mdiss with the AI extras and the Ollama client
pip install "mdiss[ai]" ollama

# Download the default model
ollama pull mistral:7b

# Start the Ollama server (via the project Makefile)
make llm-serve
```
```python
from mdiss.ai.ticket_generator import AITicketGenerator

# Initialize with the default model (mistral:7b)
generator = AITicketGenerator()

# Generate a ticket
ticket = generator.generate_ticket(
    title="Fix login issues",
    description="Users cannot log in on mobile devices",
)

print(ticket)
```
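Under the hood, ticket generation talks to a local Ollama server over HTTP. As a rough sketch of what such a request looks like (the field names follow Ollama's public `/api/generate` endpoint; the `build_ollama_request` helper is hypothetical, not part of mdiss):

```python
import json


def build_ollama_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    # Request body for Ollama's /api/generate endpoint.
    # "stream": False asks for one complete JSON response instead of chunks.
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    }


payload = build_ollama_request(
    "mistral:7b",
    "Write an issue ticket for: users cannot log in on mobile devices",
)
print(json.dumps(payload, indent=2))
```

The generator's keyword arguments (model, temperature, and so on) ultimately map onto a payload of this shape.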
| Command | Description |
|---|---|
| `make llm-serve` | Start the Ollama server if it is not running |
| `make llm-pull` | Download the Mistral 7B model |
| `make llm-list` | List available Ollama models |
| `make llm-test` | Test the LLM integration |
```bash
# Set the Ollama base URL (default: http://localhost:11434)
export OLLAMA_BASE_URL="http://localhost:11434"

# Set the default model (default: mistral:7b)
export DEFAULT_LLM_MODEL="mistral:7b"

# Set the timeout for API calls in seconds (default: 300)
export OLLAMA_TIMEOUT=300
```
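If you read these variables in your own scripts, falling back to the documented defaults might look like this (the variable names come from the exports above; the reading code itself is only a sketch):

```python
import os

# Fall back to the documented defaults when a variable is unset.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
DEFAULT_LLM_MODEL = os.environ.get("DEFAULT_LLM_MODEL", "mistral:7b")
OLLAMA_TIMEOUT = int(os.environ.get("OLLAMA_TIMEOUT", "300"))

print(OLLAMA_BASE_URL, DEFAULT_LLM_MODEL, OLLAMA_TIMEOUT)
```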
```python
from mdiss.ai.ticket_generator import AITicketGenerator

# Initialize with a custom model and sampling parameters
generator = AITicketGenerator(
    model="llama2",
    temperature=0.7,
    max_tokens=2000,
    top_p=0.9,
    timeout=300,
)
```
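Because the generator depends on a running Ollama server, it can be worth probing the server before issuing a long request. A minimal reachability check, assuming the default base URL from the configuration section (the `ollama_is_up` helper is hypothetical, not an mdiss API):

```python
import urllib.error
import urllib.request


def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    # A plain GET on the server root succeeds when Ollama is running.
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if not ollama_is_up():
    print("Ollama is not reachable; run `make llm-serve` first.")
```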
You can use any model supported by Ollama. Some recommended models:
- `mistral:7b` - Fast and capable general-purpose model (default)
- `llama2` - Meta's LLaMA 2 model
- `codellama` - Code-specific model based on LLaMA
- `mixtral` - High-quality Mixture of Experts model

List all available models:

```bash
make llm-list
```
If the integration fails, these commands help diagnose the setup:

```bash
# Start the Ollama server
make llm-serve

# Check what is listening on the Ollama port
lsof -i :11434

# Pull a missing model
ollama pull <model_name>

# List installed models
ollama list
```

Enable debug logging:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
If you encounter any issues, please check the Ollama server logs in `~/.ollama/logs/`.