
CloneBox
Clone your workstation to an isolated VM
Clone your workstation environment to an isolated VM in 60 seconds using bind mounts instead of disk cloning.
CloneBox lets you create isolated virtual machines with only the applications, directories and services you need - using bind mounts instead of full disk cloning. Perfect for development, testing, or creating reproducible environments.
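In practice, that usually comes down to a single command run from your project directory (the same command is covered in detail below):

clonebox clone . --user --run   # detect, generate .clonebox.yaml, then create and start the VM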

CloneBox is aimed at developers who need isolated, reproducible development and test environments without rebuilding a machine by hand.
v1.1.2 is production-ready with two full runtimes and P2P secure sharing:
| Feature | Status |
|---|---|
| VM Runtime (libvirt/QEMU) | ✅ Stable |
| Container Runtime (Podman/Docker) | ✅ Stable |
| Web Dashboard (FastAPI + HTMX + Tailwind) | ✅ Stable |
| Profiles System (ml-dev, web-stack) | ✅ Stable |
| Auto-detection (services, apps, paths) | ✅ Stable |
| P2P Secure Transfer (AES-256) | ✅ NEW |
| Snapshot Management | ✅ NEW |
| Health Check System | ✅ NEW |
| 95%+ Test Coverage | ✅ |
Share VMs between workstations with AES-256 encryption:
# Generate team encryption key (once per team)
clonebox keygen
# Key saved: ~/.clonebox.key
# Export encrypted VM
clonebox export-encrypted my-dev-vm -o team-env.enc --user-data
# Transfer via SCP/SMB/USB
scp team-env.enc user@workstationB:~/
# Import on another machine (needs same key)
clonebox import-encrypted team-env.enc --name my-dev-copy
# Or use P2P commands directly
clonebox export-remote user@hostA my-vm -o local.enc --encrypted
clonebox import-remote local.enc user@hostB --encrypted
clonebox sync-key user@hostB # Sync encryption key
clonebox list-remote user@hostB # List remote VMs
Save and restore VM states:
# Create snapshot before risky operation
clonebox snapshot create my-vm --name "before-upgrade" --user
# List all snapshots
clonebox snapshot list my-vm --user
# Restore to previous state
clonebox snapshot restore my-vm --name "before-upgrade" --user
# Delete old snapshot
clonebox snapshot delete my-vm --name "before-upgrade" --user
Configure health probes in .clonebox.yaml:
health_checks:
- name: nginx
type: http
url: http://localhost:80/health
expected_status: 200
- name: postgres
type: tcp
host: localhost
port: 5432
- name: redis
type: command
exec: "redis-cli ping"
expected_output: "PONG"
Run health checks:
clonebox health my-vm --user
See TODO.md for detailed roadmap and CONTRIBUTING.md for contribution guidelines.
CloneBox is a CLI tool for quickly cloning your current workstation environment into an isolated virtual machine (VM). Instead of copying the whole disk, it uses bind mounts (live directory sharing) and cloud-init to selectively carry over only what you need: running services (Docker, PostgreSQL, nginx), applications, project paths, and configuration. It automatically downloads Ubuntu images, installs packages, and boots the VM with a SPICE GUI. Ideal for developers on Linux - the VM is ready in minutes, without duplicating data.
Key commands:
- clonebox - interactive wizard (detect + create + start)
- clonebox detect - scan services/apps/paths
- clonebox clone . --user --run - quick clone of the current directory with a user session and autostart
- clonebox watch . --user - live monitoring of apps and services in the VM
- clonebox repair . --user - fix permission, audio, and service problems
- clonebox container up|ps|stop|rm - lightweight container runtime (podman/docker)
- clonebox dashboard - local dashboard (VM + containers)

Problem: developers and vibecoders don't isolate dev/test environments (e.g. for AI agents) because recreating a setup by hand is painful - hours spent installing apps, services, configs, and dotfiles. Moving from a physical PC to a VM would require a full rebuild, which blocks the workflow.
The CloneBox solution: it automatically scans and clones the "here and now" state (services from ps, containers from docker ps, projects via git/.env). The VM inherits the environment without copying all the clutter - only the selected bind mounts.
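Under the hood this is ordinary system introspection; roughly the kind of commands involved (illustrative only, not CloneBox internals):

ps -eo comm | sort -u                        # running processes/services
docker ps --format '{{.Names}} {{.Image}}'   # running containers
find ~/projects -maxdepth 3 -name .git       # project roots (plus .env files)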
Benefits in this context (embedded/distributed systems, AI automation) are easiest to see with an example: you have Kubernetes/Podman running with your home lab plus an automotive leasing project. clonebox clone ~/projects --run - the VM is ready in 30 seconds with the same services, but isolated. Better than Docker (which gives you no GUI or full OS) or a full migration.
Why don't people already do this? Lack of automation - nobody wants to rebuild everything by hand.
Run the setup script to automatically install dependencies and configure the environment:
# Clone the repository
git clone https://github.com/wronai/clonebox.git
cd clonebox
# Run the setup script
./setup.sh
The setup script installs the dependencies below and configures libvirt. To install them manually instead:
# Install libvirt and QEMU/KVM
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager virt-viewer
# Enable and start libvirtd
sudo systemctl enable --now libvirtd
# Add user to libvirt group
sudo usermod -aG libvirt $USER
newgrp libvirt
# Install genisoimage for cloud-init
sudo apt install genisoimage
# From source
git clone https://github.com/wronai/clonebox.git
cd clonebox
pip install -e .
# Or directly
pip install clonebox
The dashboard has optional dependencies:
pip install "clonebox[dashboard]"
or, if you installed from source:
# Activate the venv
source .venv/bin/activate
# Interactive mode (wizard)
clonebox
# Or individual commands
clonebox detect              # Show detected services/apps/paths
clonebox list                # List VMs
clonebox create --config ... # Create a VM from a JSON config
clonebox start <name>        # Start a VM
clonebox stop <name>         # Stop a VM
clonebox delete <name>       # Delete a VM
CloneBox has comprehensive test coverage with unit tests and end-to-end tests:
# Run unit tests only (fast, no libvirt required)
make test
# Run fast unit tests (excludes slow tests)
make test-unit
# Run end-to-end tests (requires libvirt/KVM)
make test-e2e
# Run all tests including e2e
make test-all
# Run tests with coverage
make test-cov
# Run tests with verbose output
make test-verbose
Tests are organized with pytest markers:
- End-to-end tests (@pytest.mark.e2e)
- Slow tests (@pytest.mark.slow)

E2E tests are automatically skipped when:
- /dev/kvm is not available
- Running in CI (CI=true or GITHUB_ACTIONS=true)

# Run only unit tests (exclude e2e)
pytest tests/ -m "not e2e"
# Run only e2e tests
pytest tests/e2e/ -m "e2e" -v
# Run specific test file
pytest tests/test_cloner.py -v
# Run with coverage
pytest tests/ -m "not e2e" --cov=clonebox --cov-report=html
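Because e2e tests are skipped when CI=true, you can also simulate the CI behaviour locally (this relies on the skip logic described above):

CI=true make test-all    # e2e tests will be skipped automatically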
Simply run clonebox to start the interactive wizard:
clonebox
clonebox clone . --user --run --replace --base-image ~/ubuntu-22.04-cloud.qcow2 --disk-size-gb 60
# Check live diagnostics
clonebox watch . --user
clonebox test . --user --validate --require-running-apps
# Run full validation (uses QGA to check services inside the VM)
clonebox test . --user --validate --smoke-test
Profiles let you keep ready-made presets for VMs/containers (e.g. ml-dev, web-dev) and layer them on top of the base configuration.
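A profile is just a config fragment that gets merged over the base config; a hypothetical ml-dev profile might look roughly like this (illustrative only - check the bundled templates under src/clonebox/templates/profiles/ for the actual schema):

# hypothetical ml-dev profile - field names are illustrative
packages:
  - python3
  - python3-pip
  - python3-venv
services: []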
# Example: start a container with a profile
clonebox container up . --profile ml-dev --engine podman
# Example: generate a VM config with a profile
clonebox clone . --profile ml-dev --user --run
Default profile locations:
- ~/.clonebox.d/<name>.yaml
- ./.clonebox.d/<name>.yaml
- src/clonebox/templates/profiles/<name>.yaml

clonebox dashboard --port 8080
# http://127.0.0.1:8080
The wizard will detect your environment, generate a config, and create and start the VM.
# Create VM with specific config
clonebox create --name my-dev-vm --config '{
"paths": {
"/home/user/projects": "/mnt/projects",
"/home/user/.config": "/mnt/config"
},
"packages": ["python3", "nodejs", "docker.io"],
"services": ["docker"]
}' --ram 4096 --vcpus 4 --disk-size-gb 20 --start
# Create VM with larger root disk
clonebox create --name my-dev-vm --disk-size-gb 30 --config '{"paths": {}, "packages": [], "services": []}'
# List VMs
clonebox list
# Start/Stop VM
clonebox start my-dev-vm
clonebox stop my-dev-vm
# Delete VM
clonebox delete my-dev-vm
# Detect system state (useful for scripting)
clonebox detect --json
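For scripting you can pretty-print or filter the JSON with jq (the exact field names depend on what was detected on your system):

clonebox detect --json | jq .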
# 1. Clone current directory with auto-detection
clonebox clone . --user
# 2. Review generated config
cat .clonebox.yaml
# 3. Create and start VM
clonebox start . --user --viewer
# 4. Check VM status
clonebox status . --user
# 5. Open VM window later
clonebox open . --user
# 6. Stop VM when done
clonebox stop . --user
# 7. Delete VM if needed
clonebox delete . --user --yes
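The generated .clonebox.yaml reviewed in step 2 follows the fragments shown elsewhere in this README; a minimal illustrative sketch (the real generated file will contain more detail and may use different keys):

vm:
  username: ubuntu
  password: ${VM_PASSWORD}   # loaded from .env
  disk_size_gb: 20
paths:
  /home/user/projects: /home/ubuntu/projects
app_data_paths:
  /home/user/.config/JetBrains: /home/ubuntu/.config/JetBrains
packages:
  - python3
  - docker.io
services:
  - docker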
# Clone with app data (browser profiles, IDE settings)
clonebox clone . --user --run
# VM will have:
# - All your project directories
# - Browser profiles (Chrome, Firefox) with bookmarks and passwords
# - IDE settings (PyCharm, VSCode)
# - Docker containers and services
# Access in VM:
ls ~/.config/google-chrome # Chrome profile
# Firefox profile (Ubuntu often ships Firefox as a snap):
ls ~/snap/firefox/common/.mozilla/firefox
ls ~/.mozilla/firefox
# PyCharm profile (snap):
ls ~/snap/pycharm-community/common/.config/JetBrains
ls ~/.config/JetBrains
# Start a dev container (auto-detect engine if not specified)
clonebox container up . --engine podman --detach
# List running containers
clonebox container ps
# Stop/remove
clonebox container stop <name>
clonebox container rm <name>
clonebox test verifies that the VM actually has the configured paths mounted and meets the requirements from .clonebox.yaml.
clonebox test . --user --validate
Validated categories: mounts, APT packages, snap packages, and services.
# Quick test - basic checks
clonebox test . --user --quick
# Full validation - checks EVERYTHING against YAML config
clonebox test . --user --validate
# Validation checks:
# ✅ All mount points (paths + app_data_paths) are mounted and accessible
# ✅ All APT packages are installed
# ✅ All snap packages are installed
# ✅ All services are enabled and running
# ✅ Reports file counts for each mount
# ✅ Shows package versions
# ✅ Comprehensive summary table
# Example output:
# Validating Mount Points...
# ┌─────────────────────────┬─────────┬────────────┬───────┐
# │ Guest Path              │ Mounted │ Accessible │ Files │
# ├─────────────────────────┼─────────┼────────────┼───────┤
# │ /home/ubuntu/Downloads  │ ✅      │ ✅         │ 199   │
# │ ~/.config/JetBrains     │ ✅      │ ✅         │ 45    │
# └─────────────────────────┴─────────┴────────────┴───────┘
# 12/14 mounts working
#
# Validating APT Packages...
# ┌───────────┬──────────────┬────────────┐
# │ Package   │ Status       │ Version    │
# ├───────────┼──────────────┼────────────┤
# │ firefox   │ ✅ Installed │ 122.0+b... │
# │ docker.io │ ✅ Installed │ 24.0.7-... │
# └───────────┴──────────────┴────────────┘
# 8/8 packages installed
#
# Validation Summary
# ┌────────────────┬────────┬────────┬───────┐
# │ Category       │ Passed │ Failed │ Total │
# ├────────────────┼────────┼────────┼───────┤
# │ Mounts         │ 12     │ 2      │ 14    │
# │ APT Packages   │ 8      │ 0      │ 8     │
# │ Snap Packages  │ 2      │ 0      │ 2     │
# │ Services       │ 5      │ 1      │ 6     │
# │ TOTAL          │ 27     │ 3      │ 30    │
# └────────────────┴────────┴────────┴───────┘
# Check overall status including mount validation
clonebox status . --user
# Output shows:
# VM State: running
# Network and IP address
# Cloud-init: Complete
# Mount Points status table:
# ┌─────────────────────────┬────────────────┬───────┐
# │ Guest Path              │ Status         │ Files │
# ├─────────────────────────┼────────────────┼───────┤
# │ /home/ubuntu/Downloads  │ ✅ Mounted     │ 199   │
# │ /home/ubuntu/Documents  │ ❌ Not mounted │ ?     │
# │ ~/.config/JetBrains     │ ✅ Mounted     │ 45    │
# └─────────────────────────┴────────────────┴───────┘
# 12/14 mounts active
# Health Check Status: OK
# Trigger full health check
clonebox status . --user --health
# If mounts are missing, remount or rebuild:
# In VM: sudo mount -a
# Or rebuild: clonebox clone . --user --run --replace
CloneBox includes continuous monitoring and automatic self-healing capabilities for both GUI applications and system services.
# Watch real-time status of apps and services
clonebox watch . --user
# Output shows live dashboard:
# CloneBox Live Monitor
# ------------------------------------------------------------
# GUI Apps:
#   ✅ pycharm-community   PID: 1234   Memory: 512MB
#   ✅ firefox             PID: 5678   Memory: 256MB
#   ❌ chromium            Not running
#
# System Services:
#   ✅ docker     Active: 2h 15m
#   ✅ nginx      Active: 1h 30m
#   ✅ uvicorn    Active: 45m (port 8000)
#
# Last check: 2024-01-31 13:25:30
# Next check in: 25 seconds
# ------------------------------------------------------------
# Check detailed status with logs
clonebox status . --user --verbose
# View monitor logs from host
./scripts/clonebox-logs.sh # Interactive log viewer
# Or via SSH:
ssh ubuntu@<IP_VM> "tail -f /var/log/clonebox-monitor.log"
# Run automatic repair from host
clonebox repair . --user
# This triggers the repair script inside VM which:
# - Fixes directory permissions (pulse, ibus, dconf)
# - Restarts audio services (PulseAudio/PipeWire)
# - Reconnects snap interfaces
# - Remounts missing filesystems
# - Resets GNOME keyring if needed
# Interactive repair menu (via SSH)
ssh ubuntu@<IP_VM> "clonebox-repair"
# Manual repair options from host:
clonebox repair . --user --auto # Full automatic repair
clonebox repair . --user --perms # Fix permissions only
clonebox repair . --user --audio # Fix audio only
clonebox repair . --user --snaps # Reconnect snaps only
clonebox repair . --user --mounts # Remount filesystems only
# Check repair status (via SSH)
ssh ubuntu@<IP_VM> "cat /var/run/clonebox-status"
# View repair logs
./scripts/clonebox-logs.sh # Interactive viewer
# Or via SSH:
ssh ubuntu@<IP_VM> "tail -n 50 /var/log/clonebox-boot.log"
The monitoring system is configured through environment variables in .env:
# Enable/disable monitoring
CLONEBOX_ENABLE_MONITORING=true
CLONEBOX_MONITOR_INTERVAL=30 # Check every 30 seconds
CLONEBOX_AUTO_REPAIR=true # Auto-restart failed services
CLONEBOX_WATCH_APPS=true # Monitor GUI apps
CLONEBOX_WATCH_SERVICES=true # Monitor system services
# Check monitor service status
systemctl --user status clonebox-monitor
# View monitor logs
journalctl --user -u clonebox-monitor -f
tail -f /var/log/clonebox-monitor.log
# Stop/start monitoring
systemctl --user stop clonebox-monitor
systemctl --user start clonebox-monitor
# Check last status
cat /var/run/clonebox-monitor-status
# Run repair manually
clonebox-repair --all # Run all fixes
clonebox-repair --status # Show current status
clonebox-repair --logs # Show recent logs
# On workstation A - Export VM with all data
clonebox export . --user --include-data -o my-dev-env.tar.gz
# Transfer file to workstation B, then import
clonebox import my-dev-env.tar.gz --user
# Start VM on new workstation
clonebox start . --user
clonebox open . --user
# VM includes:
# - Complete disk image
# - All browser profiles and settings
# - Project files
# - Docker images and containers
# If mounts are empty after reboot:
clonebox status . --user # Check VM status
# Then in VM:
sudo mount -a # Remount all fstab entries
# If browser profiles don't sync:
rm .clonebox.yaml
clonebox clone . --user --run --replace
# If GUI doesn't open:
clonebox open . --user # Easiest way
# or:
virt-viewer --connect qemu:///session clone-clonebox
# Check VM details:
clonebox list # List all VMs
virsh --connect qemu:///session dominfo clone-clonebox
# Restart VM if needed:
clonebox restart . --user # Easiest - stop and start
clonebox stop . --user && clonebox start . --user # Manual restart
clonebox restart . --user --open # Restart and open GUI
virsh --connect qemu:///session reboot clone-clonebox # Direct reboot
virsh --connect qemu:///session reset clone-clonebox # Hard reset if frozen
These examples use the older create command with manual JSON config. For most users, the clone command with auto-detection is easier.
clonebox create --name python-dev --config '{
"paths": {
"/home/user/my-python-project": "/workspace",
"/home/user/.pyenv": "/root/.pyenv"
},
"packages": ["python3", "python3-pip", "python3-venv", "build-essential"],
"services": []
}' --ram 2048 --start
clonebox create --name docker-dev --config '{
"paths": {
"/home/user/docker-projects": "/projects",
"/var/run/docker.sock": "/var/run/docker.sock"
},
"packages": ["docker.io", "docker-compose"],
"services": ["docker"]
}' --ram 4096 --start
clonebox create --name fullstack --config '{
"paths": {
"/home/user/my-app": "/app",
"/home/user/pgdata": "/var/lib/postgresql/data"
},
"packages": ["nodejs", "npm", "postgresql"],
"services": ["postgresql"]
}' --ram 4096 --vcpus 4 --start
After the VM boots, shared directories are automatically mounted via fstab entries. You can check their status:
# Check mount status
mount | grep 9p
# View health check report
cat /var/log/clonebox-health.log
# Re-run health check manually
clonebox-health
# Check cloud-init status
sudo cloud-init status
# Manual mount (if needed)
sudo mkdir -p /mnt/projects
sudo mount -t 9p -o trans=virtio,version=9p2000.L,nofail mount0 /mnt/projects
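For reference, the generated fstab entries use the same 9p options as the manual mount above; an entry looks roughly like this (mount tags and target paths differ per VM):

mount0  /mnt/projects  9p  trans=virtio,version=9p2000.L,nofail  0  0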
CloneBox includes automated health checks that verify the VM's mounts and services.
Health check logs are saved to /var/log/clonebox-health.log with a summary in /var/log/clonebox-health-status.
┌──────────────────────────────────────────────────────────┐
│                       HOST SYSTEM                         │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐  │
│  │ /home/user/  │   │  /var/www/   │   │    Docker    │  │
│  │  projects/   │   │    html/     │   │    Socket    │  │
│  └───────┬──────┘   └───────┬──────┘   └───────┬──────┘  │
│          │                  │                  │         │
│          │  9p/virtio       │                  │         │
│          │  bind mounts     │                  │         │
│  ┌───────┴──────────────────┴──────────────────┴──────┐  │
│  │                     CloneBox VM                     │  │
│  │  ┌────────────┐  ┌────────────┐  ┌────────────┐    │  │
│  │  │ /mnt/proj  │  │  /mnt/www  │  │ /var/run/  │    │  │
│  │  │            │  │            │  │ docker.sock│    │  │
│  │  └────────────┘  └────────────┘  └────────────┘    │  │
│  │                                                     │  │
│  │     cloud-init installed packages & services        │  │
│  └─────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────┘
The fastest way to clone your current working directory:
# Clone current directory - generates .clonebox.yaml and asks to create VM
# Base OS image is automatically downloaded to ~/Downloads on first run
clonebox clone .
# Increase VM disk size (recommended for GUI + large tooling)
clonebox clone . --user --disk-size-gb 30
# Clone specific path
clonebox clone ~/projects/my-app
# Clone with custom name and auto-start
clonebox clone ~/projects/my-app --name my-dev-vm --run
# Clone and edit config before creating
clonebox clone . --edit
# Replace existing VM (stops, deletes, and recreates)
clonebox clone . --replace
# Use custom base image instead of auto-download
clonebox clone . --base-image ~/ubuntu-22.04-cloud.qcow2
# User session mode (no root required)
clonebox clone . --user
Later, start the VM from any directory with .clonebox.yaml:
# Start VM from config in current directory
clonebox start .
# Start VM from specific path
clonebox start ~/projects/my-app
# Export detected state as YAML (with deduplication)
clonebox detect --yaml --dedupe
# Save to file
clonebox detect --yaml --dedupe -o my-config.yaml
CloneBox automatically downloads a bootable Ubuntu cloud image on first run:
# Auto-download (default) - downloads Ubuntu 22.04 to ~/Downloads on first run
clonebox clone .
# Use custom base image
clonebox clone . --base-image ~/my-custom-image.qcow2
# Manual download (optional - clonebox does this automatically)
wget -O ~/Downloads/clonebox-ubuntu-jammy-amd64.qcow2 \
https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
Base image behavior:
- Unless --base-image is specified, the Ubuntu 22.04 cloud image is auto-downloaded
- The downloaded image is cached as ~/Downloads/clonebox-ubuntu-jammy-amd64.qcow2

VM credentials are managed through a .env file for security:
Setup:
1. Copy .env.example to .env:
cp .env.example .env
2. Edit .env and set your password:
# .env file
VM_PASSWORD=your_secure_password
VM_USERNAME=ubuntu
3. The .clonebox.yaml file references the password from .env:
vm:
username: ubuntu
password: ${VM_PASSWORD} # Loaded from .env
Default credentials (if .env is not configured):
- Username: ubuntu
- Password: ubuntu

Security notes:
- .env is automatically gitignored (never committed)
- The VM password lives only in .env (sensitive, not committed)
- Change the password inside the VM with passwd if needed

CloneBox supports creating VMs in a user session (no root required) with automatic network fallback:
# Create VM in user session (uses ~/.local/share/libvirt/images)
clonebox clone . --user
# Explicitly use user-mode networking (slirp) - works without libvirt network
clonebox clone . --user --network user
# Force libvirt default network (may fail in user session)
clonebox clone . --network default
# Auto mode (default): tries libvirt network, falls back to user-mode if unavailable
clonebox clone . --network auto
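To see which networks your user session actually has (useful for understanding the auto fallback):

virsh --connect qemu:///session net-list --all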
Network modes:
- auto (default): uses the libvirt default network if available, otherwise falls back to user-mode (slirp)
- default: forces use of the libvirt default network
- user: uses user-mode networking (slirp) - no bridge setup required

| Command | Description |
|---|---|
| `clonebox` | Interactive VM creation wizard |
| `clonebox clone <path>` | Generate .clonebox.yaml from path + running processes |
| `clonebox clone . --run` | Clone and immediately start VM |
| `clonebox clone . --edit` | Clone, edit config, then create |
| `clonebox clone . --replace` | Replace existing VM (stop, delete, recreate) |
| `clonebox clone . --user` | Clone in user session (no root) |
| `clonebox clone . --base-image <path>` | Use custom base image |
| `clonebox clone . --disk-size-gb <gb>` | Set root disk size in GB (generated configs default to 20GB) |
| `clonebox clone . --network user` | Use user-mode networking (slirp) |
| `clonebox clone . --network auto` | Auto-detect network mode (default) |
| `clonebox create --config <json> --disk-size-gb <gb>` | Create VM from JSON config with specified disk size |
| `clonebox start .` | Start VM from .clonebox.yaml in current dir |
| `clonebox start . --viewer` | Start VM and open GUI window |
| `clonebox start <name>` | Start existing VM by name |
| `clonebox stop .` | Stop VM from .clonebox.yaml in current dir |
| `clonebox stop . -f` | Force stop VM |
| `clonebox delete .` | Delete VM from .clonebox.yaml in current dir |
| `clonebox delete . --yes` | Delete VM without confirmation |
| `clonebox list` | List all VMs |
| `clonebox detect` | Show detected services/apps/paths |
| `clonebox detect --yaml` | Output as YAML config |
| `clonebox detect --yaml --dedupe` | YAML with duplicates removed |
| `clonebox detect --json` | Output as JSON |
| `clonebox container up .` | Start a dev container for given path |
| `clonebox container ps` | List containers |
| `clonebox container stop <name>` | Stop a container |
| `clonebox container rm <name>` | Remove a container |
| `clonebox dashboard` | Run local dashboard (VM + containers) |
| `clonebox status . --user` | Check VM health, cloud-init, IP, and mount status |
| `clonebox status . --user --health` | Check VM status and run full health check |
| `clonebox test . --user` | Test VM configuration (basic checks) |
| `clonebox test . --user --validate` | Full validation: mounts, packages, services vs YAML |
| `clonebox export . --user` | Export VM for migration to another workstation |
| `clonebox export . --user --include-data` | Export VM with browser profiles and configs |
| `clonebox import archive.tar.gz --user` | Import VM from export archive |
| `clonebox open . --user` | Open GUI viewer for VM (same as virt-viewer) |
| `virt-viewer --connect qemu:///session <vm>` | Open GUI for running VM |
| `virsh --connect qemu:///session console <vm>` | Open text console (Ctrl+] to exit) |
Requirements: hardware virtualization support (/dev/kvm) and membership in the libvirt group.

If you install a full desktop environment and large development tools (e.g. ubuntu-desktop-minimal, docker.io, large snaps like pycharm-community/chromium), you may hit low disk space warnings inside the VM.
Recommended fix: increase the disk size in .clonebox.yaml:

vm:
  disk_size_gb: 30
You can also set it during config generation:
clonebox clone . --user --disk-size-gb 30
Notes:
- Configs generated by clonebox clone default to disk_size_gb: 20.
- Override vm.disk_size_gb in .clonebox.yaml to change it.

Workaround for an existing VM (host-side resize + guest filesystem grow):
clonebox stop . --user
qemu-img resize ~/.local/share/libvirt/images/<vm-name>/root.qcow2 +10G
clonebox start . --user
Inside the VM:
sudo growpart /dev/vda 1
sudo resize2fs /dev/vda1
df -h /
During validation you may occasionally see a crash dialog from IBus Preferences in the Ubuntu desktop environment.
This is an upstream issue related to the input method daemon (ibus-daemon) and obsolete system packages (e.g. libglib2.0, libssl3, libxml2, openssl).
It does not affect CloneBox functionality and the VM operates normally.
Workaround: run sudo apt upgrade inside the VM to update system packages.

If snap-installed applications (e.g., PyCharm, Chromium) are installed but don't launch when clicked, the issue is usually disconnected snap interfaces. This happens because snap interfaces are not auto-connected when installing via cloud-init.
New VMs created with updated CloneBox automatically connect snap interfaces, but for older VMs or manual installs:
# Check snap interface connections
snap connections pycharm-community
# If you see "-" instead of ":desktop", interfaces are NOT connected
# Connect required interfaces
sudo snap connect pycharm-community:desktop :desktop
sudo snap connect pycharm-community:desktop-legacy :desktop-legacy
sudo snap connect pycharm-community:x11 :x11
sudo snap connect pycharm-community:wayland :wayland
sudo snap connect pycharm-community:home :home
sudo snap connect pycharm-community:network :network
# Restart snap daemon and try again
sudo systemctl restart snapd
snap run pycharm-community
For Chromium/Firefox:
sudo snap connect chromium:desktop :desktop
sudo snap connect chromium:x11 :x11
sudo snap connect firefox:desktop :desktop
sudo snap connect firefox:x11 :x11
Debug launch:
PYCHARM_DEBUG=true snap run pycharm-community 2>&1 | tee /tmp/pycharm-debug.log
Nuclear option (reinstall):
snap remove pycharm-community
rm -rf ~/snap/pycharm-community
sudo snap install pycharm-community --classic
sudo snap connect pycharm-community:desktop :desktop
If you encounter "Network not found" or "network 'default' is not active" errors:
# Option 1: Use user-mode networking (no setup required)
clonebox clone . --user --network user
# Option 2: Run the network fix script
./fix-network.sh
# Or manually fix:
virsh --connect qemu:///session net-destroy default 2>/dev/null
virsh --connect qemu:///session net-undefine default 2>/dev/null
virsh --connect qemu:///session net-define /tmp/default-network.xml
virsh --connect qemu:///session net-start default
If you get permission errors:
# Ensure user is in libvirt and kvm groups
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
# Log out and log back in for groups to take effect
If you get a "VM already exists" error:
# Option 1: Use --replace flag to automatically replace it
clonebox clone . --replace
# Option 2: Delete manually first
clonebox delete <vm-name>
# Option 3: Use virsh directly
virsh --connect qemu:///session destroy <vm-name>
virsh --connect qemu:///session undefine <vm-name>
# Option 4: Start the existing VM instead
clonebox start <vm-name>
If the GUI doesn't open:
# Install virt-viewer
sudo apt install virt-viewer
# Then connect manually
virt-viewer --connect qemu:///session <vm-name>
If browser profiles or PyCharm configs aren't available, or you get permission errors:
Root cause: VM was created with old version without proper mount permissions.
Solution - Rebuild VM with latest fixes:
# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes
# Recreate VM with fixed permissions and app data mounts
clonebox clone . --user --run --replace
After rebuild, verify mounts in VM:
# Check all mounts are accessible
ls ~/.config/google-chrome # Chrome profile
ls ~/.mozilla/firefox # Firefox profile
ls ~/.config/JetBrains # PyCharm settings
ls ~/Downloads # Downloads folder
ls ~/Documents # Documents folder
What changed in v0.1.12:
- Mounts now use uid=1000,gid=1000 for ubuntu user access
- Both paths and app_data_paths are properly mounted

If you get a "must be superuser to use mount" error when accessing Downloads/Documents:
Solution: VM was created with old mount configuration. Recreate VM:
# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes
# Recreate with fixed permissions
clonebox clone . --user --run --replace
What was fixed:
- Mounts now use uid=1000,gid=1000 so the ubuntu user has access

If shared directories appear empty after VM restart:

cat /etc/fstab | grep 9p
sudo mount -a

Notes:
- accessmode="mapped" allows any user to access mounts
- uid=1000,gid=1000 grants user-level access

Export your complete VM environment:
# Export VM with all data
clonebox export . --user --include-data -o my-dev-env.tar.gz
# Transfer to new workstation, then import
clonebox import my-dev-env.tar.gz --user
clonebox start . --user
Validate your VM setup:
# Quick test (basic checks)
clonebox test . --user --quick
# Full test (includes health checks)
clonebox test . --user --verbose
Check VM status from workstation:
# Check VM state, IP, cloud-init, and health
clonebox status . --user
# Trigger health check in VM
clonebox status . --user --health
If you close the VM window, you can reopen it:
# Open GUI viewer (easiest)
clonebox open . --user
# Start VM and open GUI (if VM is stopped)
clonebox start . --user --viewer
# Open GUI for running VM
virt-viewer --connect qemu:///session clone-clonebox
# List VMs to get the correct name
clonebox list
# Text console (no GUI)
virsh --connect qemu:///session console clone-clonebox
# Press Ctrl + ] to exit console
To use CloneBox VMs in Proxmox, you need to convert the qcow2 disk image to Proxmox format.
# Find VM disk location
clonebox list
# Check VM details for disk path
virsh --connect qemu:///session dominfo clone-clonebox
# Typical locations:
# User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
# System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2
# Export VM with all data (from current directory with .clonebox.yaml)
clonebox export . --user --include-data -o clonebox-vm.tar.gz
# Or export specific VM by name
clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz
# Extract to get the disk image
tar -xzf clonebox-vm.tar.gz
cd clonebox-clonebox
ls -la # Should show disk.qcow2, vm.xml, etc.
# Install qemu-utils if not installed
sudo apt install qemu-utils
# Convert qcow2 to raw format (Proxmox preferred)
qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw
# Or convert to qcow2 with compression for smaller size
qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
# Using scp (replace with your Proxmox host IP)
scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
# Or using rsync for large files
rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
Log into the Proxmox Web UI, create a new VM and attach the converted disk (or use the CLI method shown below), then start the VM in Proxmox.
# In VM console, update network interfaces
sudo nano /etc/netplan/01-netcfg.yaml
# Example for Proxmox bridge:
network:
version: 2
renderer: networkd
ethernets:
ens18: # Proxmox typically uses ens18
dhcp4: true
sudo netplan apply
# Mount points will fail in Proxmox, remove them
sudo nano /etc/fstab
# Comment out or remove 9p mount entries
# Reboot to apply changes
sudo reboot
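If you prefer a one-liner to editing fstab by hand, something like this comments out the 9p entries (illustrative; double-check the result before rebooting):

sudo sed -i.bak '/\s9p\s/s/^/#/' /etc/fstab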
If you have Proxmox with shared storage:
# On Proxmox host
# Create a temporary directory
mkdir /tmp/import
# Copy disk directly to Proxmox storage (example for local-lvm)
scp vm-disk.raw root@proxmox:/tmp/import/
# On Proxmox host, create VM using CLI
qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0
# Import disk to VM
qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm
# Attach disk to VM
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# Set boot disk
qm set 9000 --boot c --bootdisk scsi0
Data exported with --include-data will be available in the VM disk.

Apache License - see LICENSE file.