nlp2cmd

NLP2CMD Service Mode

NLP2CMD can now run as an HTTP API service, allowing you to integrate natural language command generation into your applications.

Quick Start

1. Configure Service Settings

# Configure and save service settings to .env file
python -m nlp2cmd config-service --host 0.0.0.0 --port 8000

# Or with custom settings
python -m nlp2cmd config-service \
    --host 127.0.0.1 \
    --port 8080 \
    --debug \
    --log-level debug \
    --cors-origins "http://localhost:3000,http://localhost:8080"

2. Start the Service

# Start with default settings (reads from .env)
python -m nlp2cmd service

# Start with custom settings
python -m nlp2cmd service \
    --host 0.0.0.0 \
    --port 8000 \
    --workers 4 \
    --reload  # Enable auto-reload for development

3. Test the Service

# Run the test script
python test_service.py

# Or test manually
curl http://localhost:8000/health
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "list files in current directory", "dsl": "shell"}'

API Endpoints

GET /

Service information and current configuration.

GET /health

Health check endpoint.

POST /query

Process natural language queries.

Request Body:

{
  "query": "list files in current directory",
  "dsl": "shell",
  "explain": false,
  "execute": false
}

Response:

{
  "success": true,
  "command": "ls -la",
  "confidence": 0.95,
  "domain": "shell",
  "intent": "list_files",
  "entities": {},
  "explanation": "Generated by RuleBasedPipeline with confidence 0.95",
  "execution_result": null
}
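
The response fields above map naturally onto a small typed wrapper on the client side. A minimal sketch (the `QueryResult` name is illustrative, not part of the API):

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class QueryResult:
    """Typed view of a POST /query response (fields from the example above)."""
    success: bool
    command: Optional[str] = None
    confidence: float = 0.0
    domain: str = ""
    intent: str = ""
    entities: dict = field(default_factory=dict)
    explanation: str = ""
    execution_result: Any = None

    @classmethod
    def from_json(cls, payload: dict) -> "QueryResult":
        # Ignore unknown keys so newer server versions don't break old clients.
        known = cls.__dataclass_fields__.keys()
        return cls(**{k: v for k, v in payload.items() if k in known})
```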

GET /config

Get current service configuration.

POST /config

Update service configuration.

POST /config/save

Save current configuration to .env file.

Configuration Options

Environment Variables

The service reads configuration from environment variables or a .env file:

# Service settings
NLP2CMD_HOST=0.0.0.0
NLP2CMD_PORT=8000
NLP2CMD_DEBUG=false
NLP2CMD_LOG_LEVEL=info
NLP2CMD_CORS_ORIGINS=*
NLP2CMD_MAX_WORKERS=4
NLP2CMD_AUTO_EXECUTE=false
NLP2CMD_SESSION_TIMEOUT=3600

# LLM settings (inherited from main config)
LITELLM_MODEL=ollama/qwen2.5-coder:7b
LITELLM_API_BASE=http://localhost:11434
LITELLM_API_KEY=
LITELLM_TEMPERATURE=0.1
LITELLM_MAX_TOKENS=2048
LITELLM_TIMEOUT=30
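
Since os.environ only stores strings, each variable has to be cast to its intended type, with the defaults above as fallbacks. A sketch of how such a setting can be read (the helper name is illustrative, not the service's actual loader):

```python
import os


def env_setting(name: str, default, cast=str):
    """Read a variable from the environment, falling back to a typed default.

    Booleans need special handling: "false" is a non-empty, truthy string.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.strip().lower() in ("1", "true", "yes", "on")
    return cast(raw)


# Example: port and debug flag with the defaults shown above
port = env_setting("NLP2CMD_PORT", 8000, int)
debug = env_setting("NLP2CMD_DEBUG", False, bool)
```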

Command Line Options

python -m nlp2cmd service [OPTIONS]

Options:
  --host TEXT                     Host to bind the service to
  --port INTEGER                  Port to bind the service to
  --debug                         Enable debug mode
  --log-level [debug|info|warning|error|critical]
                                  Log level
  --cors-origins TEXT             CORS origins (comma-separated)
  --max-workers INTEGER           Maximum number of workers
  --auto-execute                  Auto-execute generated commands
  --session-timeout INTEGER       Session timeout in seconds
  --save-env                      Save configuration to .env file
  --env-file TEXT                 Path to .env file (default: .env)
  --workers INTEGER               Number of uvicorn workers
  --reload                        Enable auto-reload for development

Usage Examples

Python Client

import requests

# Simple query; the timeout prevents the client from hanging indefinitely
response = requests.post(
    "http://localhost:8000/query",
    json={"query": "find all python files", "dsl": "shell"},
    timeout=30,
)
response.raise_for_status()

result = response.json()
if result["success"]:
    print(f"Command: {result['command']}")
    print(f"Confidence: {result['confidence']}")

JavaScript Client

// Simple query
const response = await fetch('http://localhost:8000/query', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    query: 'list docker containers',
    dsl: 'docker'
  })
});

if (!response.ok) {
  throw new Error(`HTTP error ${response.status}`);
}

const result = await response.json();
if (result.success) {
  console.log(`Command: ${result.command}`);
}

cURL Examples

# Basic query
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "show system information", "dsl": "shell"}'

# Query with explanation
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "create a backup", "dsl": "shell", "explain": true}'

# Get service configuration
curl http://localhost:8000/config

# Update configuration
curl -X POST http://localhost:8000/config \
  -H "Content-Type: application/json" \
  -d '{"log_level": "debug"}'

Development

Running with Auto-Reload

python -m nlp2cmd service --reload --debug

Testing

# Run the test suite
python test_service.py

# Test with custom base URL
python test_service.py http://localhost:8080

Dependencies

Service mode requires additional dependencies:

pip install fastapi "uvicorn[standard]"

Or install all dependencies:

pip install -r requirements.txt

Security Considerations

  1. Auto-Execute: Be careful with --auto-execute or NLP2CMD_AUTO_EXECUTE=true: the service will run generated commands without any confirmation step.

  2. Network Binding: By default the service binds to 0.0.0.0 (all interfaces). For production, consider binding to a specific interface, e.g. 127.0.0.1 behind a reverse proxy.

  3. CORS: Configure CORS origins appropriately for your use case.

  4. Authentication: Consider adding authentication middleware for production deployments.
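
For point 4, a framework-agnostic sketch of an API-key check that could be wired into a FastAPI dependency or any middleware. This is not an existing NLP2CMD feature; the NLP2CMD_API_KEY variable here is an assumed convention:

```python
import hmac
import os
from typing import Optional


def check_api_key(presented: str, expected: Optional[str] = None) -> bool:
    """Constant-time comparison of a presented key against the configured one.

    hmac.compare_digest avoids the timing side channel a plain `==` can leak.
    """
    if expected is None:
        expected = os.environ.get("NLP2CMD_API_KEY", "")
    if not expected:
        return False  # no key configured: fail closed instead of allowing all
    return hmac.compare_digest(presented.encode(), expected.encode())
```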

Production Deployment

Docker

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
EXPOSE 8000

CMD ["python", "-m", "nlp2cmd", "service", "--host", "0.0.0.0", "--port", "8000"]

Docker Compose

version: '3.8'
services:
  nlp2cmd:
    build: .
    ports:
      - "8000:8000"
    environment:
      - NLP2CMD_HOST=0.0.0.0
      - NLP2CMD_PORT=8000
      - NLP2CMD_DEBUG=false
      - NLP2CMD_LOG_LEVEL=info
    volumes:
      - ./.env:/app/.env

Systemd Service

[Unit]
Description=NLP2CMD API Service
After=network.target

[Service]
Type=simple
User=nlp2cmd
WorkingDirectory=/opt/nlp2cmd
Environment=PATH=/opt/nlp2cmd/venv/bin
ExecStart=/opt/nlp2cmd/venv/bin/python -m nlp2cmd service
Restart=always

[Install]
WantedBy=multi-user.target

Troubleshooting

Service Won’t Start

  1. Check if required dependencies are installed:
    pip install fastapi "uvicorn[standard]"
    
  2. Check if port is already in use:
    netstat -tlnp | grep :8000
    
  3. Enable debug mode for more detailed logs:
    python -m nlp2cmd service --debug --log-level debug
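
Step 2 can also be checked from Python without netstat; a small stdlib sketch (the function name is illustrative):

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        # connect_ex returns 0 on success instead of raising on failure
        return s.connect_ex((host, port)) == 0
```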
    

Connection Refused

  1. Verify the service is running and listening on the correct port.
  2. Check firewall settings.
  3. Ensure the host is correctly configured (0.0.0.0 for external access, 127.0.0.1 for local only).
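
A retry loop helps distinguish a slow startup from a real refusal. A stdlib sketch that polls the /health endpoint (assumes it answers HTTP 200 once the service is ready):

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout_s: float = 30.0, interval_s: float = 0.5) -> bool:
    """Poll `url` until it returns 200 or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry after a short pause
        time.sleep(interval_s)
    return False
```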

Slow Response Times

  1. Consider increasing the worker count (--workers for uvicorn processes, --max-workers for the service) to handle more concurrent requests.
  2. Check system resources (CPU, memory).
  3. Enable debug logging to identify bottlenecks.
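
When diagnosing point 3, tail latencies are more telling than averages. A minimal stdlib helper for timing repeated requests and summarizing the result (names are illustrative):

```python
import math
import statistics
import time
from typing import Callable, List


def measure(fn: Callable[[], object], runs: int = 20) -> List[float]:
    """Call `fn` repeatedly and return per-call durations in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # e.g. a lambda issuing one POST /query request
        durations.append(time.perf_counter() - start)
    return durations


def summarize(durations: List[float]) -> dict:
    """Median and nearest-rank p95; the tail often reveals what the mean hides."""
    ordered = sorted(durations)
    p95_idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return {"median_s": statistics.median(ordered), "p95_s": ordered[p95_idx]}
```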