NLP2CMD can now run as an HTTP API service, allowing you to integrate natural language command generation into your applications.
# Configure and save service settings to .env file
python -m nlp2cmd config-service --host 0.0.0.0 --port 8000
# Or with custom settings
python -m nlp2cmd config-service \
  --host 127.0.0.1 \
  --port 8080 \
  --debug \
  --log-level debug \
  --cors-origins "http://localhost:3000,http://localhost:8080"
# Start with default settings (reads from .env)
python -m nlp2cmd service
# Start with custom settings
python -m nlp2cmd service \
  --host 0.0.0.0 \
  --port 8000 \
  --workers 4 \
  --reload  # Enable auto-reload for development
# Run the test script
python test_service.py
# Or test manually
curl http://localhost:8000/health
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "list files in current directory", "dsl": "shell"}'
GET / - Service information and current configuration.
GET /health - Health check endpoint.
POST /query - Process natural language queries.
Request Body:
{
  "query": "list files in current directory",
  "dsl": "shell",
  "explain": false,
  "execute": false
}
Response:
{
  "success": true,
  "command": "ls -la",
  "confidence": 0.95,
  "domain": "shell",
  "intent": "list_files",
  "entities": {},
  "explanation": "Generated by RuleBasedPipeline with confidence 0.95",
  "execution_result": null
}
GET /config - Get current service configuration.
POST /config - Update service configuration.
POST /config/save - Save current configuration to .env file.
The service reads configuration from environment variables or .env file:
# Service settings
NLP2CMD_HOST=0.0.0.0
NLP2CMD_PORT=8000
NLP2CMD_DEBUG=false
NLP2CMD_LOG_LEVEL=info
NLP2CMD_CORS_ORIGINS=*
NLP2CMD_MAX_WORKERS=4
NLP2CMD_AUTO_EXECUTE=false
NLP2CMD_SESSION_TIMEOUT=3600
# LLM settings (inherited from main config)
LITELLM_MODEL=ollama/qwen2.5-coder:7b
LITELLM_API_BASE=http://localhost:11434
LITELLM_API_KEY=
LITELLM_TEMPERATURE=0.1
LITELLM_MAX_TOKENS=2048
LITELLM_TIMEOUT=30
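To illustrate how such variables resolve to typed settings, here is a minimal sketch of an environment reader. This is not NLP2CMD's actual loader; the defaults are taken from the list above, and the type coercions (int for ports and timeouts, truthy strings for flags) are assumptions.

```python
import os

# Documented defaults for the service settings; types inferred from the
# example values above, not from NLP2CMD source.
DEFAULTS = {
    "NLP2CMD_HOST": "0.0.0.0",
    "NLP2CMD_PORT": 8000,
    "NLP2CMD_DEBUG": False,
    "NLP2CMD_LOG_LEVEL": "info",
    "NLP2CMD_CORS_ORIGINS": "*",
    "NLP2CMD_MAX_WORKERS": 4,
    "NLP2CMD_AUTO_EXECUTE": False,
    "NLP2CMD_SESSION_TIMEOUT": 3600,
}

def load_settings(env=os.environ):
    """Read each setting from the environment, coercing to the default's type."""
    settings = {}
    for key, default in DEFAULTS.items():
        raw = env.get(key)
        if raw is None:
            settings[key] = default
        elif isinstance(default, bool):  # check bool before int: bool is an int subclass
            settings[key] = raw.strip().lower() in ("1", "true", "yes")
        elif isinstance(default, int):
            settings[key] = int(raw)
        else:
            settings[key] = raw
    return settings
```

Checking `bool` before `int` matters because `True` is an `int` in Python; otherwise flags would be coerced with `int(raw)` and reject values like `"true"`.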
python -m nlp2cmd service [OPTIONS]
Options:
  --host TEXT                Host to bind the service to
  --port INTEGER             Port to bind the service to
  --debug                    Enable debug mode
  --log-level [debug|info|warning|error|critical]
                             Log level
  --cors-origins TEXT        CORS origins (comma-separated)
  --max-workers INTEGER      Maximum number of workers
  --auto-execute             Auto-execute generated commands
  --session-timeout INTEGER  Session timeout in seconds
  --save-env                 Save configuration to .env file
  --env-file TEXT            Path to .env file (default: .env)
  --workers INTEGER          Number of uvicorn workers
  --reload                   Enable auto-reload for development
import requests
# Simple query
response = requests.post("http://localhost:8000/query", json={
    "query": "find all python files",
    "dsl": "shell"
})

result = response.json()
if result["success"]:
    print(f"Command: {result['command']}")
    print(f"Confidence: {result['confidence']}")
// Simple query
const response = await fetch('http://localhost:8000/query', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    query: 'list docker containers',
    dsl: 'docker'
  })
});

const result = await response.json();
if (result.success) {
  console.log(`Command: ${result.command}`);
}
# Basic query
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "show system information", "dsl": "shell"}'
# Query with explanation
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "create a backup", "dsl": "shell", "explain": true}'
# Get service configuration
curl http://localhost:8000/config
# Update configuration
curl -X POST http://localhost:8000/config \
  -H "Content-Type: application/json" \
  -d '{"log_level": "debug"}'
python -m nlp2cmd service --reload --debug
# Run the test suite
python test_service.py
# Test with custom base URL
python test_service.py http://localhost:8080
Service mode requires additional dependencies:
pip install fastapi uvicorn[standard]
Or install all dependencies:
pip install -r requirements.txt
Auto-Execute: Be careful with --auto-execute or NLP2CMD_AUTO_EXECUTE=true as it will automatically run generated commands.
Network Binding: By default the service binds to 0.0.0.0 (all interfaces). For production, consider binding to specific interfaces.
CORS: Configure CORS origins appropriately for your use case.
Authentication: Consider adding authentication middleware for production deployments.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "-m", "nlp2cmd", "service", "--host", "0.0.0.0", "--port", "8000"]
version: '3.8'
services:
  nlp2cmd:
    build: .
    ports:
      - "8000:8000"
    environment:
      - NLP2CMD_HOST=0.0.0.0
      - NLP2CMD_PORT=8000
      - NLP2CMD_DEBUG=false
      - NLP2CMD_LOG_LEVEL=info
    volumes:
      - ./.env:/app/.env
[Unit]
Description=NLP2CMD API Service
After=network.target
[Service]
Type=simple
User=nlp2cmd
WorkingDirectory=/opt/nlp2cmd
Environment=PATH=/opt/nlp2cmd/venv/bin
ExecStart=/opt/nlp2cmd/venv/bin/python -m nlp2cmd service
Restart=always
[Install]
WantedBy=multi-user.target
pip install fastapi uvicorn[standard]
netstat -tlnp | grep :8000
python -m nlp2cmd service --debug --log-level debug
Tune --max-workers to control how many requests are processed concurrently.