
CloneBox πŸ“¦


╔═══════════════════════════════════════════════════════╗
β•‘     ____  _                    ____                   β•‘
β•‘    / ___|| |  ___   _ __   ___|  _ \  ___ __  __      β•‘
β•‘   | |    | | / _ \ | '_ \ / _ \ |_) |/ _ \\ \/ /      β•‘
β•‘   | |___ | || (_) || | | |  __/  _ <| (_) |>  <       β•‘
β•‘    \____||_| \___/ |_| |_|\___|_| \_\\___//_/\_\      β•‘
β•‘                                                       β•‘
β•‘      Clone your workstation to an isolated VM         β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Clone your workstation environment to an isolated VM in 60 seconds using bind mounts instead of disk cloning.

CloneBox lets you create isolated virtual machines with only the applications, directories, and services you need, using bind mounts instead of full disk cloning. Perfect for development, testing, or creating reproducible environments.

Features

See the feature and status table under What's New in v1.1 below.

Screenshot: GUI of a cloned Ubuntu VM

Use Cases

CloneBox excels in scenarios where developers need:

  β€’ Isolated dev/test environments (for example, sandboxes for AI agents) without hours of manual setup
  β€’ Reproducible environments that mirror the current workstation state
  β€’ A move from a physical workstation to a VM without a full rebuild

What’s New in v1.1

v1.1.2 is production-ready with two full runtimes and P2P secure sharing:

Feature Status
πŸ–₯️ VM Runtime (libvirt/QEMU) βœ… Stable
🐳 Container Runtime (Podman/Docker) βœ… Stable
πŸ“Š Web Dashboard (FastAPI + HTMX + Tailwind) βœ… Stable
πŸŽ›οΈ Profiles System (ml-dev, web-stack) βœ… Stable
πŸ” Auto-detection (services, apps, paths) βœ… Stable
πŸ”’ P2P Secure Transfer (AES-256) βœ… NEW
πŸ“Έ Snapshot Management βœ… NEW
πŸ₯ Health Check System βœ… NEW
πŸ§ͺ 95%+ Test Coverage βœ…

P2P Secure VM Sharing

Share VMs between workstations with AES-256 encryption:

# Generate team encryption key (once per team)
clonebox keygen
# πŸ”‘ Key saved: ~/.clonebox.key

# Export encrypted VM
clonebox export-encrypted my-dev-vm -o team-env.enc --user-data

# Transfer via SCP/SMB/USB
scp team-env.enc user@workstationB:~/

# Import on another machine (needs same key)
clonebox import-encrypted team-env.enc --name my-dev-copy

# Or use P2P commands directly
clonebox export-remote user@hostA my-vm -o local.enc --encrypted
clonebox import-remote local.enc user@hostB --encrypted
clonebox sync-key user@hostB  # Sync encryption key
clonebox list-remote user@hostB  # List remote VMs
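
Every peer needs the same key file, so it is worth restricting it to your user; a minimal hardening step (the path comes from the keygen output above):

# Restrict the shared key to the current user
chmod 600 ~/.clonebox.key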

Snapshot Management

Save and restore VM states:

# Create snapshot before risky operation
clonebox snapshot create my-vm --name "before-upgrade" --user

# List all snapshots
clonebox snapshot list my-vm --user

# Restore to previous state
clonebox snapshot restore my-vm --name "before-upgrade" --user

# Delete old snapshot
clonebox snapshot delete my-vm --name "before-upgrade" --user

Health Checks

Configure health probes in .clonebox.yaml:

health_checks:
  - name: nginx
    type: http
    url: http://localhost:80/health
    expected_status: 200
    
  - name: postgres
    type: tcp
    host: localhost
    port: 5432
    
  - name: redis
    type: command
    exec: "redis-cli ping"
    expected_output: "PONG"

Run health checks:

clonebox health my-vm --user

Roadmap

See TODO.md for detailed roadmap and CONTRIBUTING.md for contribution guidelines.

CloneBox is a CLI tool for quickly cloning your current workstation environment to an isolated virtual machine (VM). Instead of copying the whole disk, it uses bind mounts (live directory sharing) and cloud-init to selectively carry over only what you need: running services (Docker, PostgreSQL, nginx), applications, project paths, and configuration. It automatically downloads Ubuntu images, installs packages, and boots the VM with a SPICE GUI. Ideal for developers on Linux: the VM is ready in minutes, with no data duplication.

Key commands: clonebox clone, clonebox start/stop/delete, clonebox detect, and clonebox list (see the Commands Reference below).

Why do virtual workstation clones make sense?

The problem: developers (and vibe coders) don't isolate their dev/test environments (e.g., for AI agents) because recreating a setup by hand is painful: hours spent installing apps, services, configs, and dotfiles. Moving from a physical PC to a VM would require a full rebuild, which blocks the workflow.

The CloneBox solution: it automatically scans and clones the here-and-now state (services from ps, containers from docker ps, projects from git/.env). The VM inherits the environment without copying all the clutter: only the selected bind mounts come along.
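
To preview what would be picked up before creating anything, the detection commands documented later in this README can be run on their own:

# Preview what CloneBox detects on this machine
clonebox detect          # human-readable summary of services/apps/paths
clonebox detect --json   # machine-readable output for scripting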

The benefits are most visible in contexts like embedded/distributed systems and AI automation.

Example: you have Kubernetes on Podman running with your home lab plus an automotive-leasing project. clonebox clone ~/projects --run β†’ a VM is ready in about 30 seconds, with the same services, but isolated. Better than Docker (no GUI/full OS) or a full migration.

Why don't people already do this? Lack of automation: nobody wants to rebuild everything by hand.

Installation

Run the setup script to automatically install dependencies and configure the environment:

# Clone the repository
git clone https://github.com/wronai/clonebox.git
cd clonebox

# Run the setup script
./setup.sh

The setup script will:

  β€’ Install the system dependencies listed under Manual Installation below
  β€’ Prepare a Python virtual environment (.venv)

Manual Installation

Prerequisites

# Install libvirt and QEMU/KVM
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager virt-viewer

# Enable and start libvirtd
sudo systemctl enable --now libvirtd

# Add user to libvirt group
sudo usermod -aG libvirt $USER
newgrp libvirt

# Install genisoimage for cloud-init
sudo apt install genisoimage

Install CloneBox

# From source
git clone https://github.com/wronai/clonebox.git
cd clonebox
pip install -e .

# Or directly
pip install clonebox

The dashboard has optional dependencies:

pip install "clonebox[dashboard]"

or

# Activate the venv
source .venv/bin/activate

# Interactive mode (wizard)
clonebox

# Or individual commands
clonebox detect              # Show detected services/apps/paths
clonebox list                # List VMs
clonebox create --config ... # Create a VM from a JSON config
clonebox start <name>        # Start a VM
clonebox stop <name>         # Stop a VM
clonebox delete <name>       # Delete a VM

Development and Testing

Running Tests

CloneBox has comprehensive test coverage with unit tests and end-to-end tests:

# Run unit tests only (fast, no libvirt required)
make test

# Run fast unit tests (excludes slow tests)
make test-unit

# Run end-to-end tests (requires libvirt/KVM)
make test-e2e

# Run all tests including e2e
make test-all

# Run tests with coverage
make test-cov

# Run tests with verbose output
make test-verbose

Test Categories

Tests are organized with pytest markers; e2e marks end-to-end tests and slow marks long-running tests (see the commands below).

E2E tests are automatically skipped when libvirt/KVM is not available on the host.

Manual Test Execution

# Run only unit tests (exclude e2e)
pytest tests/ -m "not e2e"

# Run only e2e tests
pytest tests/e2e/ -m "e2e" -v

# Run specific test file
pytest tests/test_cloner.py -v

# Run with coverage
pytest tests/ -m "not e2e" --cov=clonebox --cov-report=html
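
The last command writes an HTML coverage report; assuming pytest-cov's default output directory (htmlcov/), it can be opened directly:

# Open the HTML coverage report generated above
xdg-open htmlcov/index.html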

Quick Start

Simply run clonebox to start the interactive wizard:

clonebox

clonebox clone . --user --run --replace --base-image ~/ubuntu-22.04-cloud.qcow2 --disk-size-gb 60
# Watch live diagnostics
clonebox watch . --user

clonebox test . --user --validate --require-running-apps
# Run full validation (uses QGA to check services inside the guest)
clonebox test . --user --validate --smoke-test

Profiles (Reusable presets)

Profiles pozwalają trzymać gotowe presety dla VM/container (np. ml-dev, web-dev) i nakładać je na bazową konfigurację.

# Example: start a container with a profile
clonebox container up . --profile ml-dev --engine podman

# Example: generate a VM config with a profile
clonebox clone . --profile ml-dev --user --run

Default profile locations:

Dashboard

clonebox dashboard --port 8080
# http://127.0.0.1:8080

The wizard will:

  1. Detect running services (Docker, PostgreSQL, nginx, etc.)
  2. Detect running applications and their working directories
  3. Detect project directories and config files
  4. Let you select what to include in the VM
  5. Create and optionally start the VM

Command Line

# Create VM with specific config
clonebox create --name my-dev-vm --config '{
  "paths": {
    "/home/user/projects": "/mnt/projects",
    "/home/user/.config": "/mnt/config"
  },
  "packages": ["python3", "nodejs", "docker.io"],
  "services": ["docker"]
}' --ram 4096 --vcpus 4 --disk-size-gb 20 --start

# Create VM with larger root disk
clonebox create --name my-dev-vm --disk-size-gb 30 --config '{"paths": {}, "packages": [], "services": []}'

# List VMs
clonebox list

# Start/Stop VM
clonebox start my-dev-vm
clonebox stop my-dev-vm

# Delete VM
clonebox delete my-dev-vm

# Detect system state (useful for scripting)
clonebox detect --json

Usage Examples

Basic Workflow

# 1. Clone current directory with auto-detection
clonebox clone . --user

# 2. Review generated config
cat .clonebox.yaml

# 3. Create and start VM
clonebox start . --user --viewer

# 4. Check VM status
clonebox status . --user

# 5. Open VM window later
clonebox open . --user

# 6. Stop VM when done
clonebox stop . --user

# 7. Delete VM if needed
clonebox delete . --user --yes
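
Step 2 above prints the generated config. For orientation, the sketch below assembles the .clonebox.yaml fields shown elsewhere in this README (vm settings, paths, packages, services); it is illustrative only, and the file generated by clonebox clone may contain more keys:

# Illustrative .clonebox.yaml sketch (not the exact generated schema)
vm:
  username: ubuntu
  password: ${VM_PASSWORD}   # loaded from .env
  disk_size_gb: 20
paths:
  /home/user/projects: /mnt/projects
packages:
  - python3
services:
  - docker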

Development Environment with Browser Profiles

# Clone with app data (browser profiles, IDE settings)
clonebox clone . --user --run

# VM will have:
# - All your project directories
# - Browser profiles (Chrome, Firefox) with bookmarks and passwords
# - IDE settings (PyCharm, VSCode)
# - Docker containers and services

# Access in VM:
ls ~/.config/google-chrome  # Chrome profile

# Firefox profile (Ubuntu often uses snap):
ls ~/snap/firefox/common/.mozilla/firefox
ls ~/.mozilla/firefox

# PyCharm profile (snap):
ls ~/snap/pycharm-community/common/.config/JetBrains
ls ~/.config/JetBrains

Container workflow (podman/docker)

# Start a dev container (auto-detect engine if not specified)
clonebox container up . --engine podman --detach

# List running containers
clonebox container ps

# Stop/remove
clonebox container stop <name>
clonebox container rm <name>

Full validation (VM)

clonebox test verifies that the VM actually has the configured paths mounted and meets the requirements from .clonebox.yaml.

clonebox test . --user --validate

Validated categories: mount points, APT packages, snap packages, and services (see the breakdown below).

Testing and Validating VM Configuration

# Quick test - basic checks
clonebox test . --user --quick

# Full validation - checks EVERYTHING against YAML config
clonebox test . --user --validate

# Validation checks:
# βœ… All mount points (paths + app_data_paths) are mounted and accessible
# βœ… All APT packages are installed
# βœ… All snap packages are installed
# βœ… All services are enabled and running
# βœ… Reports file counts for each mount
# βœ… Shows package versions
# βœ… Comprehensive summary table

# Example output:
# πŸ’Ύ Validating Mount Points...
# β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”
# β”‚ Guest Path              β”‚ Mounted β”‚ Accessible β”‚ Files  β”‚
# β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€€
# β”‚ /home/ubuntu/Downloads  β”‚ βœ…      β”‚ βœ…         β”‚ 199    β”‚
# β”‚ ~/.config/JetBrains     β”‚ βœ…      β”‚ βœ…         β”‚ 45     β”‚
# β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
# 12/14 mounts working
#
# πŸ“¦ Validating APT Packages...
# β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
# β”‚ Package         β”‚ Status       β”‚ Version    β”‚
# β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
# β”‚ firefox         β”‚ βœ… Installed β”‚ 122.0+b... β”‚
# β”‚ docker.io       β”‚ βœ… Installed β”‚ 24.0.7-... β”‚
# β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
# 8/8 packages installed
#
# πŸ“Š Validation Summary
# β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
# β”‚ Category       β”‚ Passed β”‚ Failed β”‚ Total β”‚
# β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€€
# β”‚ Mounts         β”‚ 12     β”‚ 2      β”‚ 14    β”‚
# β”‚ APT Packages   β”‚ 8      β”‚ 0      β”‚ 8     β”‚
# β”‚ Snap Packages  β”‚ 2      β”‚ 0      β”‚ 2     β”‚
# β”‚ Services       β”‚ 5      β”‚ 1      β”‚ 6     β”‚
# β”‚ TOTAL          β”‚ 27     β”‚ 3      β”‚ 30    β”‚
# β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜

VM Health Monitoring and Mount Validation

# Check overall status including mount validation
clonebox status . --user

# Output shows:
# πŸ“Š VM State: running
# πŸ” Network and IP address
# ☁️ Cloud-init: Complete
# πŸ’Ύ Mount Points status table:
#    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”
#    β”‚ Guest Path              β”‚ Status       β”‚ Files  β”‚
#    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€€
#    β”‚ /home/ubuntu/Downloads  β”‚ βœ… Mounted   β”‚ 199    β”‚
#    β”‚ /home/ubuntu/Documents  β”‚ ❌ Not mountedβ”‚ ?      β”‚
#    β”‚ ~/.config/JetBrains     β”‚ βœ… Mounted   β”‚ 45     β”‚
#    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
#    12/14 mounts active
# πŸ₯ Health Check Status: OK

# Trigger full health check
clonebox status . --user --health

# If mounts are missing, remount or rebuild:
# In VM: sudo mount -a
# Or rebuild: clonebox clone . --user --run --replace

πŸ“Š Monitoring and Self-Healing

CloneBox includes continuous monitoring and automatic self-healing capabilities for both GUI applications and system services.

Monitor Running Applications and Services

# Watch real-time status of apps and services
clonebox watch . --user

# Output shows live dashboard:
# ╔══════════════════════════════════════════════════════════╗
# β•‘                   CloneBox Live Monitor                  β•‘
# ╠══════════════════════════════════════════════════════════╣
# β•‘ πŸ–₯️  GUI Apps:                                              β•‘
# β•‘   βœ… pycharm-community    PID: 1234   Memory: 512MB       β•‘
# β•‘   βœ… firefox             PID: 5678   Memory: 256MB       β•‘
# β•‘   ❌ chromium            Not running                    β•‘
# β•‘                                                          β•‘
# β•‘ πŸ”§ System Services:                                       β•‘
# β•‘   βœ… docker              Active: 2h 15m                β•‘
# β•‘   βœ… nginx               Active: 1h 30m                β•‘
# β•‘   βœ… uvicorn             Active: 45m (port 8000)       β•‘
# β•‘                                                          β•‘
# β•‘ πŸ“Š Last check: 2024-01-31 13:25:30                       β•‘
# β•‘ πŸ”„ Next check in: 25 seconds                             β•‘
# β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

# Check detailed status with logs
clonebox status . --user --verbose

# View monitor logs from host
./scripts/clonebox-logs.sh  # Interactive log viewer
# Or via SSH:
ssh ubuntu@<IP_VM> "tail -f /var/log/clonebox-monitor.log"

Repair and Troubleshooting

# Run automatic repair from host
clonebox repair . --user

# This triggers the repair script inside VM which:
# - Fixes directory permissions (pulse, ibus, dconf)
# - Restarts audio services (PulseAudio/PipeWire)
# - Reconnects snap interfaces
# - Remounts missing filesystems
# - Resets GNOME keyring if needed

# Interactive repair menu (via SSH)
ssh ubuntu@<IP_VM> "clonebox-repair"

# Manual repair options from host:
clonebox repair . --user --auto      # Full automatic repair
clonebox repair . --user --perms     # Fix permissions only
clonebox repair . --user --audio     # Fix audio only
clonebox repair . --user --snaps     # Reconnect snaps only
clonebox repair . --user --mounts    # Remount filesystems only

# Check repair status (via SSH)
ssh ubuntu@<IP_VM> "cat /var/run/clonebox-status"

# View repair logs
./scripts/clonebox-logs.sh  # Interactive viewer
# Or via SSH:
ssh ubuntu@<IP_VM> "tail -n 50 /var/log/clonebox-boot.log"

Monitor Configuration

The monitoring system is configured through environment variables in .env:

# Enable/disable monitoring
CLONEBOX_ENABLE_MONITORING=true
CLONEBOX_MONITOR_INTERVAL=30      # Check every 30 seconds
CLONEBOX_AUTO_REPAIR=true         # Auto-restart failed services
CLONEBOX_WATCH_APPS=true          # Monitor GUI apps
CLONEBOX_WATCH_SERVICES=true      # Monitor system services

Inside the VM - Manual Controls

# Check monitor service status
systemctl --user status clonebox-monitor

# View monitor logs
journalctl --user -u clonebox-monitor -f
tail -f /var/log/clonebox-monitor.log

# Stop/start monitoring
systemctl --user stop clonebox-monitor
systemctl --user start clonebox-monitor

# Check last status
cat /var/run/clonebox-monitor-status

# Run repair manually
clonebox-repair --all             # Run all fixes
clonebox-repair --status          # Show current status
clonebox-repair --logs            # Show recent logs

Export/Import Workflow

# On workstation A - Export VM with all data
clonebox export . --user --include-data -o my-dev-env.tar.gz

# Transfer file to workstation B, then import
clonebox import my-dev-env.tar.gz --user

# Start VM on new workstation
clonebox start . --user
clonebox open . --user

# VM includes:
# - Complete disk image
# - All browser profiles and settings
# - Project files
# - Docker images and containers

Troubleshooting Common Issues

# If mounts are empty after reboot:
clonebox status . --user  # Check VM status
# Then in VM:
sudo mount -a              # Remount all fstab entries

# If browser profiles don't sync:
rm .clonebox.yaml
clonebox clone . --user --run --replace

# If GUI doesn't open:
clonebox open . --user     # Easiest way
# or:
virt-viewer --connect qemu:///session clone-clonebox

# Check VM details:
clonebox list              # List all VMs
virsh --connect qemu:///session dominfo clone-clonebox

# Restart VM if needed:
clonebox restart . --user  # Easiest - stop and start
clonebox stop . --user && clonebox start . --user  # Manual restart
clonebox restart . --user --open  # Restart and open GUI
virsh --connect qemu:///session reboot clone-clonebox  # Direct reboot
virsh --connect qemu:///session reset clone-clonebox  # Hard reset if frozen

Legacy Examples (Manual Config)

These examples use the older create command with manual JSON config. For most users, the clone command with auto-detection is easier.

Python Development Environment

clonebox create --name python-dev --config '{
  "paths": {
    "/home/user/my-python-project": "/workspace",
    "/home/user/.pyenv": "/root/.pyenv"
  },
  "packages": ["python3", "python3-pip", "python3-venv", "build-essential"],
  "services": []
}' --ram 2048 --start

Docker Development

clonebox create --name docker-dev --config '{
  "paths": {
    "/home/user/docker-projects": "/projects",
    "/var/run/docker.sock": "/var/run/docker.sock"
  },
  "packages": ["docker.io", "docker-compose"],
  "services": ["docker"]
}' --ram 4096 --start

Full Stack (Node.js + PostgreSQL)

clonebox create --name fullstack --config '{
  "paths": {
    "/home/user/my-app": "/app",
    "/home/user/pgdata": "/var/lib/postgresql/data"
  },
  "packages": ["nodejs", "npm", "postgresql"],
  "services": ["postgresql"]
}' --ram 4096 --vcpus 4 --start

Inside the VM

After the VM boots, shared directories are automatically mounted via fstab entries. You can check their status:

# Check mount status
mount | grep 9p

# View health check report
cat /var/log/clonebox-health.log

# Re-run health check manually
clonebox-health

# Check cloud-init status
sudo cloud-init status

# Manual mount (if needed)
sudo mkdir -p /mnt/projects
sudo mount -t 9p -o trans=virtio,version=9p2000.L,nofail mount0 /mnt/projects
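
For reference, the generated fstab entries look roughly like this (illustrative; the mount0 tag and options mirror the manual mount command above, and the uid/gid options are described under Mount Points Empty or Permission Denied below):

# Illustrative /etc/fstab entry for a 9p share
mount0  /mnt/projects  9p  trans=virtio,version=9p2000.L,nofail,uid=1000,gid=1000  0  0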

Health Check System

CloneBox includes automated health checks that verify mount points are mounted and accessible, required packages are installed, and configured services are running.

Health check logs are saved to /var/log/clonebox-health.log with a summary in /var/log/clonebox-health-status.

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     HOST SYSTEM                        β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚ /home/user/  β”‚  β”‚  /var/www/   β”‚  β”‚   Docker     β”‚  β”‚
β”‚  β”‚  projects/   β”‚  β”‚    html/     β”‚  β”‚   Socket     β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚         β”‚                 β”‚                 β”‚          β”‚
β”‚         β”‚    9p/virtio    β”‚                 β”‚          β”‚
β”‚         β”‚   bind mounts   β”‚                 β”‚          β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚               CloneBox VM                        β”‚  β”‚
β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚  β”‚
β”‚  β”‚  β”‚ /mnt/proj  β”‚ β”‚ /mnt/www   β”‚ β”‚ /var/run/  β”‚    β”‚  β”‚
β”‚  β”‚  β”‚            β”‚ β”‚            β”‚ β”‚ docker.sockβ”‚    β”‚  β”‚
β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚  β”‚
β”‚  β”‚                                                  β”‚  β”‚
β”‚  β”‚  cloud-init installed packages & services        β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The fastest way to clone your current working directory:

# Clone current directory - generates .clonebox.yaml and asks to create VM
# Base OS image is automatically downloaded to ~/Downloads on first run
clonebox clone .

# Increase VM disk size (recommended for GUI + large tooling)
clonebox clone . --user --disk-size-gb 30

# Clone specific path
clonebox clone ~/projects/my-app

# Clone with custom name and auto-start
clonebox clone ~/projects/my-app --name my-dev-vm --run

# Clone and edit config before creating
clonebox clone . --edit

# Replace existing VM (stops, deletes, and recreates)
clonebox clone . --replace

# Use custom base image instead of auto-download
clonebox clone . --base-image ~/ubuntu-22.04-cloud.qcow2

# User session mode (no root required)
clonebox clone . --user

Later, start the VM from any directory with .clonebox.yaml:

# Start VM from config in current directory
clonebox start .

# Start VM from specific path
clonebox start ~/projects/my-app

Export YAML Config

# Export detected state as YAML (with deduplication)
clonebox detect --yaml --dedupe

# Save to file
clonebox detect --yaml --dedupe -o my-config.yaml

Base Images

CloneBox automatically downloads a bootable Ubuntu cloud image on first run:

# Auto-download (default) - downloads Ubuntu 22.04 to ~/Downloads on first run
clonebox clone .

# Use custom base image
clonebox clone . --base-image ~/my-custom-image.qcow2

# Manual download (optional - clonebox does this automatically)
wget -O ~/Downloads/clonebox-ubuntu-jammy-amd64.qcow2 \
  https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Base image behavior: the Ubuntu cloud image is downloaded once to ~/Downloads on first run and reused for subsequent VMs; pass --base-image to use a different image.

VM Login Credentials

VM credentials are managed through .env file for security:

Setup:

  1. Copy .env.example to .env:
    cp .env.example .env
    
  2. Edit .env and set your password (one way to generate a random password is shown after this list):
    # .env file
    VM_PASSWORD=your_secure_password
    VM_USERNAME=ubuntu
    
  3. The .clonebox.yaml file references the password from .env:
    vm:
      username: ubuntu
      password: ${VM_PASSWORD}  # Loaded from .env
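
If you want a randomly generated password for step 2, one option (illustrative, using openssl) is:

# Append a random password to .env
echo "VM_PASSWORD=$(openssl rand -base64 18)" >> .env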
    

Default credentials (if .env not configured):

Security notes: keep .env out of version control; .clonebox.yaml references the password only as ${VM_PASSWORD} and does not store it directly.

User Session & Networking

CloneBox supports creating VMs in user session (no root required) with automatic network fallback:

# Create VM in user session (uses ~/.local/share/libvirt/images)
clonebox clone . --user

# Explicitly use user-mode networking (slirp) - works without libvirt network
clonebox clone . --user --network user

# Force libvirt default network (may fail in user session)
clonebox clone . --network default

# Auto mode (default): tries libvirt network, falls back to user-mode if unavailable
clonebox clone . --network auto

Network modes: user (slirp user-mode networking, works without root or a libvirt network), default (the libvirt default NAT network, may fail in user session), and auto (tries the libvirt network and falls back to user mode when unavailable).

Commands Reference

Command Description
clonebox Interactive VM creation wizard
clonebox clone <path> Generate .clonebox.yaml from path + running processes
clonebox clone . --run Clone and immediately start VM
clonebox clone . --edit Clone, edit config, then create
clonebox clone . --replace Replace existing VM (stop, delete, recreate)
clonebox clone . --user Clone in user session (no root)
clonebox clone . --base-image <path> Use custom base image
clonebox clone . --disk-size-gb <gb> Set root disk size in GB (generated configs default to 20GB)
clonebox clone . --network user Use user-mode networking (slirp)
clonebox clone . --network auto Auto-detect network mode (default)
clonebox create --config <json> --disk-size-gb <gb> Create VM from JSON config with specified disk size
clonebox start . Start VM from .clonebox.yaml in current dir
clonebox start . --viewer Start VM and open GUI window
clonebox start <name> Start existing VM by name
clonebox stop . Stop VM from .clonebox.yaml in current dir
clonebox stop . -f Force stop VM
clonebox delete . Delete VM from .clonebox.yaml in current dir
clonebox delete . --yes Delete VM without confirmation
clonebox list List all VMs
clonebox detect Show detected services/apps/paths
clonebox detect --yaml Output as YAML config
clonebox detect --yaml --dedupe YAML with duplicates removed
clonebox detect --json Output as JSON
clonebox container up . Start a dev container for given path
clonebox container ps List containers
clonebox container stop <name> Stop a container
clonebox container rm <name> Remove a container
clonebox dashboard Run local dashboard (VM + containers)
clonebox status . --user Check VM health, cloud-init, IP, and mount status
clonebox status . --user --health Check VM status and run full health check
clonebox test . --user Test VM configuration (basic checks)
clonebox test . --user --validate Full validation: mounts, packages, services vs YAML
clonebox export . --user Export VM for migration to another workstation
clonebox export . --user --include-data Export VM with browser profiles and configs
clonebox import archive.tar.gz --user Import VM from export archive
clonebox open . --user Open GUI viewer for VM (same as virt-viewer)
virt-viewer --connect qemu:///session <vm> Open GUI for running VM
virsh --connect qemu:///session console <vm> Open text console (Ctrl+] to exit)

Requirements

  β€’ Linux host with KVM/QEMU and libvirt (see Installation)
  β€’ Python 3.8+
  β€’ genisoimage (for building cloud-init seed images)
  β€’ virt-viewer (optional, for the GUI window)

Troubleshooting

Critical: Insufficient Disk Space

If you install a full desktop environment and large development tools (e.g. ubuntu-desktop-minimal, docker.io, large snaps like pycharm-community/chromium), you may hit low disk space warnings inside the VM.

Recommended fix:

vm:
  disk_size_gb: 30

You can also set it during config generation:

clonebox clone . --user --disk-size-gb 30


Workaround for an existing VM (host-side resize + guest filesystem grow):

clonebox stop . --user
qemu-img resize ~/.local/share/libvirt/images/<vm-name>/root.qcow2 +10G
clonebox start . --user

Inside the VM:

sudo growpart /dev/vda 1
sudo resize2fs /dev/vda1
df -h /
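
Note that growpart and resize2fs above assume an ext4 root filesystem on /dev/vda1; if unsure, check the layout first:

# Confirm the partition layout and filesystem type
lsblk -f /dev/vda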

Known Issue: IBus Preferences crash

During validation you may occasionally see a crash dialog from IBus Preferences in the Ubuntu desktop environment. This is an upstream issue related to the input method daemon (ibus-daemon) and outdated system libraries (e.g. libglib2.0, libssl3, libxml2, openssl). It does not affect CloneBox functionality, and the VM operates normally.

Workaround:

Snap Apps Not Launching (PyCharm, Chromium, Firefox)

If snap-installed applications (e.g., PyCharm, Chromium) are installed but don’t launch when clicked, the issue is usually disconnected snap interfaces. This happens because snap interfaces are not auto-connected when installing via cloud-init.

New VMs created with updated CloneBox automatically connect snap interfaces, but for older VMs or manual installs:

# Check snap interface connections
snap connections pycharm-community

# If you see "-" instead of ":desktop", interfaces are NOT connected

# Connect required interfaces
sudo snap connect pycharm-community:desktop :desktop
sudo snap connect pycharm-community:desktop-legacy :desktop-legacy
sudo snap connect pycharm-community:x11 :x11
sudo snap connect pycharm-community:wayland :wayland
sudo snap connect pycharm-community:home :home
sudo snap connect pycharm-community:network :network

# Restart snap daemon and try again
sudo systemctl restart snapd
snap run pycharm-community

For Chromium/Firefox:

sudo snap connect chromium:desktop :desktop
sudo snap connect chromium:x11 :x11
sudo snap connect firefox:desktop :desktop
sudo snap connect firefox:x11 :x11

Debug launch:

PYCHARM_DEBUG=true snap run pycharm-community 2>&1 | tee /tmp/pycharm-debug.log

Nuclear option (reinstall):

snap remove pycharm-community
rm -rf ~/snap/pycharm-community
sudo snap install pycharm-community --classic
sudo snap connect pycharm-community:desktop :desktop

Network Issues

If you encounter β€œNetwork not found” or β€œnetwork β€˜default’ is not active” errors:

# Option 1: Use user-mode networking (no setup required)
clonebox clone . --user --network user

# Option 2: Run the network fix script
./fix-network.sh

# Or manually fix:
virsh --connect qemu:///session net-destroy default 2>/dev/null
virsh --connect qemu:///session net-undefine default 2>/dev/null
virsh --connect qemu:///session net-define /tmp/default-network.xml
virsh --connect qemu:///session net-start default
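
The manual fix references /tmp/default-network.xml without showing its contents; the standard libvirt default NAT network definition looks like this (a sketch, adjust the bridge name and address range if they conflict with your setup):

# Create the network definition referenced above
cat > /tmp/default-network.xml <<'EOF'
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
EOF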

Permission Issues

If you get permission errors:

# Ensure user is in libvirt and kvm groups
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER

# Log out and log back in for groups to take effect

VM Already Exists

If you get β€œVM already exists” error:

# Option 1: Use --replace flag to automatically replace it
clonebox clone . --replace

# Option 2: Delete manually first
clonebox delete <vm-name>

# Option 3: Use virsh directly
virsh --connect qemu:///session destroy <vm-name>
virsh --connect qemu:///session undefine <vm-name>

# Option 4: Start the existing VM instead
clonebox start <vm-name>

virt-viewer not found

If GUI doesn’t open:

# Install virt-viewer
sudo apt install virt-viewer

# Then connect manually
virt-viewer --connect qemu:///session <vm-name>

Browser Profiles and PyCharm Not Working

If browser profiles or PyCharm configs aren’t available, or you get permission errors:

Root cause: VM was created with old version without proper mount permissions.

Solution - Rebuild VM with latest fixes:

# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes

# Recreate VM with fixed permissions and app data mounts
clonebox clone . --user --run --replace

After rebuild, verify mounts in VM:

# Check all mounts are accessible
ls ~/.config/google-chrome      # Chrome profile
ls ~/.mozilla/firefox           # Firefox profile  
ls ~/.config/JetBrains         # PyCharm settings
ls ~/Downloads                 # Downloads folder
ls ~/Documents                 # Documents folder

What changed in v0.1.12:

Mount Points Empty or Permission Denied

If you get β€œmust be superuser to use mount” error when accessing Downloads/Documents:

Solution: VM was created with old mount configuration. Recreate VM:

# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes

# Recreate with fixed permissions
clonebox clone . --user --run --replace

What was fixed:

Mount Points Empty After Reboot

If shared directories appear empty after VM restart:

  1. Check fstab entries:
    cat /etc/fstab | grep 9p
    
  2. Mount manually:
    sudo mount -a
    
  3. Verify access mode:
    • VMs created with accessmode="mapped" allow any user to access mounts
    • Mount options include uid=1000,gid=1000 for user access

Advanced Usage

VM Migration Between Workstations

Export your complete VM environment:

# Export VM with all data
clonebox export . --user --include-data -o my-dev-env.tar.gz

# Transfer to new workstation, then import
clonebox import my-dev-env.tar.gz --user
clonebox start . --user

Testing VM Configuration

Validate your VM setup:

# Quick test (basic checks)
clonebox test . --user --quick

# Full test (includes health checks)
clonebox test . --user --verbose

Monitoring VM Health

Check VM status from workstation:

# Check VM state, IP, cloud-init, and health
clonebox status . --user

# Trigger health check in VM
clonebox status . --user --health

Reopening VM Window

If you close the VM window, you can reopen it:

# Open GUI viewer (easiest)
clonebox open . --user

# Start VM and open GUI (if VM is stopped)
clonebox start . --user --viewer

# Open GUI for running VM
virt-viewer --connect qemu:///session clone-clonebox

# List VMs to get the correct name
clonebox list

# Text console (no GUI)
virsh --connect qemu:///session console clone-clonebox
# Press Ctrl + ] to exit console

Exporting to Proxmox

To use CloneBox VMs in Proxmox, you need to convert the qcow2 disk image to Proxmox format.

Step 1: Locate VM Disk Image

# Find VM disk location
clonebox list

# Check VM details for disk path
virsh --connect qemu:///session dominfo clone-clonebox

# Typical locations:
# User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
# System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2

Step 2: Export VM with CloneBox

# Export VM with all data (from current directory with .clonebox.yaml)
clonebox export . --user --include-data -o clonebox-vm.tar.gz

# Or export specific VM by name
clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz

# Extract to get the disk image
tar -xzf clonebox-vm.tar.gz
cd clonebox-clonebox
ls -la  # Should show disk.qcow2, vm.xml, etc.

Step 3: Convert to Proxmox Format

# Install qemu-utils if not installed
sudo apt install qemu-utils

# Convert qcow2 to raw format (Proxmox preferred)
qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw

# Or convert to qcow2 with compression for smaller size
qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
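
Either way, qemu-img info is a quick sanity check of the result (format and virtual size) before transferring it:

# Verify the converted image
qemu-img info vm-disk.raw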

Step 4: Transfer to Proxmox Host

# Using scp (replace with your Proxmox host IP)
scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/

# Or using rsync for large files
rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
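
For large images it can be worth verifying the transfer; a simple checksum comparison on both ends:

# Compare checksums between source and Proxmox host
sha256sum vm-disk.raw
ssh root@proxmox "sha256sum /var/lib/vz/template/iso/vm-disk.raw"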

Step 5: Create VM in Proxmox

  1. Log into Proxmox Web UI

  2. Create new VM:
    • Click β€œCreate VM”
    • Enter VM ID and Name
    • Set OS: β€œDo not use any media”
  3. Configure Hardware:
    • Hard Disk:
      • Delete default disk
      • Click β€œAdd” β†’ β€œHard Disk”
      • Select your uploaded image file
      • Set Disk size (can be larger than image)
      • Set Bus: β€œVirtIO SCSI”
      • Set Cache: β€œWrite back” for better performance
  4. CPU & Memory:
    • Set CPU cores (match original VM config)
    • Set Memory (match original VM config)
  5. Network:
    • Set Model: β€œVirtIO (paravirtualized)”
  6. Confirm: Click β€œFinish” to create VM

Step 6: Post-Import Configuration

  1. Start the VM in Proxmox

  2. Update network configuration:
    # In VM console, update network interfaces
    sudo nano /etc/netplan/01-netcfg.yaml
       
    # Example for Proxmox bridge:
    network:
      version: 2
      renderer: networkd
      ethernets:
        ens18:  # Proxmox typically uses ens18
          dhcp4: true
    
  3. Apply network changes:
    sudo netplan apply
    
  4. Update mount points (if needed):
    # Mount points will fail in Proxmox, remove them
    sudo nano /etc/fstab
    # Comment out or remove 9p mount entries
       
    # Reboot to apply changes
    sudo reboot
    

Alternative: Direct Import to Proxmox Storage

If you have Proxmox with shared storage:

# On Proxmox host
# Create a temporary directory
mkdir /tmp/import

# Copy disk directly to Proxmox storage (example for local-lvm)
scp vm-disk.raw root@proxmox:/tmp/import/

# On Proxmox host, create VM using CLI
qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0

# Import disk to VM
qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm

# Attach disk to VM
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Set boot disk
qm set 9000 --boot c --bootdisk scsi0
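
After attaching the boot disk, the VM can be started from the same shell to verify the import:

# Boot the imported VM
qm start 9000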


License

Apache License - see LICENSE file.