# Deploying the CM4 eMMC Provisioning Stack to Proxmox

Complete step-by-step guide for deploying the provisioning service (Proxmox host + LXC container) on a new or existing Proxmox server. Covers host preparation, network bridge configuration, LXC deployment, post-deploy setup, network boot, and first-boot asset sync.

For reference details (troubleshooting, redeploy, architecture), see PROXMOX-LXC-DEPLOYMENT.md.

## Overview

The provisioning stack consists of two parts that work together:
| Component | Where it runs | What it does |
|---|---|---|
| Host scripts + udev | Proxmox host | Detects CM4 over USB, runs rpiboot, then dd to write/read the eMMC |
| LXC container (cm4-provisioning) | Proxmox LXC | Runs the Flask dashboard on port 5000; serves portal files and golden images |
The host and LXC share `/var/lib/cm4-provisioning/` via a bind-mount, so images and status files are visible from both.

```
Workstation (this repo)
    │
    │  deploy-to-proxmox.sh (SSH + rsync)
    ▼
Proxmox Host
 ├── udev → cm4-flash-trigger.sh → flash-emmc-on-connect.sh
 ├── /opt/cm4-provisioning/      (scripts, env)
 ├── /var/lib/cm4-provisioning/  (golden.img, backups, status.json) ◄──────┐
 └── LXC: cm4-provisioning                                                 │ bind-mount
      ├── Flask dashboard :5000                                            │
      ├── /opt/cm4-provisioning/dashboard/                                 │
      └── /var/lib/cm4-provisioning/  ◄────────────────────────────────────┘
```
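The shared `/var/lib/cm4-provisioning/` directory is a standard Proxmox LXC bind-mount. In the container's config it would look roughly like this (illustrative sketch; `<CTID>` is whatever ID the deploy script assigned, and the `mp0` index is an assumption):

```
# /etc/pve/lxc/<CTID>.conf (excerpt, illustrative)
mp0: /var/lib/cm4-provisioning,mp=/var/lib/cm4-provisioning
```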
## Part 1 — Proxmox Host Prerequisites

### 1.1 Hardware & OS

- Proxmox VE 7 or 8 installed on a physical machine.
- At least one USB port accessible to the host (not passed through to a VM) for the reTerminal USB slave cable.
- At least one active storage (e.g. `local` or `local-lvm`). Check with `pvesm status`.
- Internet access on the host (needed for the initial usbboot/PiShrink install and the LXC template download).
### 1.2 SSH key access from your workstation

The deploy script connects as root using key-based auth. Set this up if not already done:

```bash
# On your workstation — copy your public key to the Proxmox host
ssh-copy-id root@YOUR_PROXMOX_HOST

# Verify
ssh root@YOUR_PROXMOX_HOST "echo OK"
```
### 1.3 Proxmox network bridges

The LXC needs at least one bridge for WAN access. To serve DHCP to devices on a dedicated provisioning LAN, it needs a second bridge.

#### WAN bridge (required)

The default WAN bridge is `vmbr0`, which is created automatically by Proxmox during installation and connects to your primary network. No extra configuration is needed.

#### LAN bridge for provisioning (required for network boot / device DHCP)

If you want the LXC to serve DHCP, TFTP, and DNS on a dedicated provisioning LAN (so devices can network-boot), create a second Linux bridge on the Proxmox host:
1. Open Proxmox Web UI → Node → Network → Create → Linux Bridge.
2. Set:
   - Name: `vmbr1` (or any unused bridge name).
   - Bridge ports: the physical NIC connected to your provisioning LAN switch (e.g. `enp2s0`). Leave blank for an internal-only bridge (useful for testing with no physical switch).
   - IPv4/CIDR: leave blank (the LXC handles the IP on this bridge, not the host).
   - Autostart: checked.
3. Click Create, then Apply Configuration.

> Note: If you connect the reTerminals via a switch that is also connected to `enp2s0`, traffic flows directly. If there is no physical NIC to dedicate, leave bridge ports blank and connect all provisioning devices as VMs/LXCs on the same internal bridge.
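If you prefer editing config files over the Web UI, the equivalent stanza in `/etc/network/interfaces` would look roughly like this (a sketch: `enp2s0` is an example NIC name, and the host intentionally gets no IP on this bridge):

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
```

Run `ifreload -a` (or Apply Configuration in the UI) to activate it.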
## Part 2 — Running the Deploy Script

From your workstation (where this repo is cloned), run a single command to deploy everything:

```bash
cd /path/to/reTerminal\ DM4
./emmc-provisioning/scripts/deploy-to-proxmox.sh root@YOUR_PROXMOX_HOST
```

Replace `YOUR_PROXMOX_HOST` with the Proxmox hostname or IP address.
### 2.1 Deploy script environment variables

Set these before running the script to customise the deployment:

| Variable | Default | Description |
|---|---|---|
| `DEPLOY_ROOTFS_STORAGE` | (interactive) | LXC rootfs storage name (e.g. `local-lvm`). If not set, the script lists storages and asks. |
| `DEPLOY_LXC_WAN_BRIDGE` | `vmbr0` | Proxmox bridge for WAN (`eth0` in the LXC). |
| `DEPLOY_LXC_WAN_IP` | `dhcp` | WAN address: `dhcp` or a static IP like `192.168.1.10/24`. |
| `DEPLOY_LXC_LAN_BRIDGE` | (none) | If set, adds `eth1` as the provisioning LAN on this bridge (e.g. `vmbr1`). |
| `DEPLOY_LXC_LAN_SUBNET` | `10.20.50.1/24` | LXC IP/prefix on the LAN bridge. Used only when `DEPLOY_LXC_LAN_BRIDGE` is set. |
| `DEPLOY_LXC_ROOT_PASSWORD` | (default) | Sets the LXC root password and enables SSH inside the container. |
| `DEPLOY_LXC_SSH_KEY` | `~/.ssh/id_ed25519.pub` | Public key to add to the LXC root's `authorized_keys`. Defaults to your workstation key. |
| `CM4_BACKUPS_HOST_PATH` | (none) | Host directory for backup images (e.g. `/mnt/storage/cm4-backups`). Bind-mounted into the LXC. |
| `DEPLOY_EMMC_SIZE_GB` | `32` | eMMC size hint in GB (used only when multiple block devices appear after rpiboot). |
| `DEPLOY_LOG` | (off) | Set to `1` to write a timestamped log file in `scripts/`. |
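How these defaults combine with your environment can be sketched with POSIX `:=` parameter expansion, which keeps an exported value and falls back to the documented default otherwise (illustrative only; the real logic is inside deploy-to-proxmox.sh):

```bash
# Sketch: exported values win, otherwise the documented default applies
: "${DEPLOY_LXC_WAN_BRIDGE:=vmbr0}"
: "${DEPLOY_LXC_WAN_IP:=dhcp}"
: "${DEPLOY_LXC_LAN_SUBNET:=10.20.50.1/24}"
: "${DEPLOY_EMMC_SIZE_GB:=32}"

echo "WAN: ${DEPLOY_LXC_WAN_BRIDGE} (${DEPLOY_LXC_WAN_IP})  LAN: ${DEPLOY_LXC_LAN_SUBNET}  eMMC hint: ${DEPLOY_EMMC_SIZE_GB} GB"
```

Running it with `DEPLOY_LXC_WAN_IP=192.168.1.10/24` exported shows the override taking effect while the other values fall back to their defaults.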
### 2.2 Full deploy example

```bash
DEPLOY_ROOTFS_STORAGE=local-lvm \
DEPLOY_LXC_ROOT_PASSWORD='YourSecurePassword' \
DEPLOY_LXC_LAN_BRIDGE=vmbr1 \
DEPLOY_LXC_LAN_SUBNET=10.20.50.1/24 \
./emmc-provisioning/scripts/deploy-to-proxmox.sh root@10.20.30.40
```
### 2.3 What the deploy script does

The script runs five stages:

1. **Check** — SSHes to the host, finds an existing `cm4-provisioning` container by hostname (or lists storage for new container creation).
2. **Clean + Rsync** — Wipes `/tmp/emmc-provisioning-deploy` on the host and rsyncs the entire repo there (excluding `.git` and deploy logs).
3. **Remote install (host + LXC)** — Runs a remote heredoc that:
   - Creates the LXC (Debian 12, 1 GB RAM, 8 GB rootfs) if it doesn't exist, or reuses it by hostname.
   - Adds `eth1` (LAN bridge) if `DEPLOY_LXC_LAN_BRIDGE` is set.
   - Configures the bind-mount for `/var/lib/cm4-provisioning/`.
   - Installs host scripts to `/opt/cm4-provisioning/` and udev rules to `/etc/udev/rules.d/`.
   - Installs and enables systemd units: `cm4-flash.service`, `cm4-build-cloudinit.path`/`.service`, `cm4-shrink.path`/`.service`.
   - Writes `/opt/cm4-provisioning/env` (golden image path, rpiboot dir, eMMC size).
   - Installs `python3-flask` and `openssh-server` in the LXC (skipped if already present).
   - Deploys the Flask dashboard and enables/restarts `cm4-dashboard.service` in the LXC.
   - Installs usbboot (`rpiboot`) on the host if not already present.
   - Installs PiShrink on the host if not already present.
4. **LXC start** — Starts the LXC if stopped.
5. **Summary** — Prints the LXC WAN IP (and LAN IP if set), the dashboard URL, and remaining manual steps.

On redeploy (container already exists): host scripts, dashboard, env, systemd, and udev files are always updated. LXC creation, bind-mounts, apt installs, usbboot, and PiShrink are skipped when already present.
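The skip-if-present behaviour can be sketched as a small shell pattern (illustrative; the real checks live inside deploy-to-proxmox.sh and may test install paths rather than command names):

```bash
# Sketch of idempotent installs: skip work that is already done
install_if_missing() {
    tool=$1
    installer=$2
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool already installed; skipping"
    else
        echo "$tool missing; would run: $installer"
    fi
}

install_if_missing sh noop                             # sh is always present
install_if_missing rpiboot install-usbboot-on-host.sh  # present only after a deploy
```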
## Part 3 — Post-Deploy: Required Manual Steps

### Step 1: Verify the dashboard is up

```bash
# Get the LXC IP from deploy output, or:
ssh root@YOUR_PROXMOX_HOST \
  "CID=\$(pct list | awk '\$3==\"cm4-provisioning\"{print \$1}'); pct exec \$CID -- hostname -I"

# Open the dashboard
open http://<LXC-IP>:5000
```
The dashboard should show "Waiting for device in USB boot mode" on the home page.
### Step 2: Verify host services

SSH to the Proxmox host and confirm the host side is healthy:

```bash
ssh root@YOUR_PROXMOX_HOST

# Check the udev rule is installed
ls /etc/udev/rules.d/90-cm4-boot-mode.rules

# Check the flash trigger script
ls /usr/local/bin/cm4-flash-trigger.sh

# Check host scripts
ls /opt/cm4-provisioning/
# Expected: flash-emmc-on-connect.sh, build-cloudinit-image.sh,
#           run-shrink-on-host.sh, fix-gadget-bootcode-on-host.sh, env

# Check systemd path units are active
systemctl status cm4-build-cloudinit.path cm4-shrink.path

# Check auto-flash is enabled
ls /etc/cm4-provisioning/enabled
```
### Step 3: Add a golden image

A golden image is required for Deploy (writing an image to the device's eMMC). Backup (reading an image from the device) works without one.

**Option A — Build via the dashboard:**

1. Open `http://<LXC-IP>:5000` → Admin tab.
2. Click **Build cloud-init image**: the host downloads the latest Raspberry Pi OS, injects your cloud-init `user-data`, and creates `golden.img`.
3. Click **Set as golden** once the build finishes.

**Option B — Copy an existing image:**

```bash
scp /path/to/your-golden.img root@YOUR_PROXMOX_HOST:/var/lib/cm4-provisioning/golden.img
```

**Option C — Promote a backup:** In the dashboard Admin → Images tab, select a backup and click **Set as golden**.
### Step 4: Enable SSH into the LXC (optional)

If you ran the deploy with `DEPLOY_LXC_ROOT_PASSWORD` or a default SSH key, the LXC already has SSH enabled. Otherwise:

```bash
# From your workstation — adds your default SSH key and enables root SSH
./emmc-provisioning/scripts/setup-lxc-ssh.sh root@YOUR_PROXMOX_HOST

# Or with a specific key and password
ROOT_PASSWORD='YourPassword' \
./emmc-provisioning/scripts/setup-lxc-ssh.sh root@YOUR_PROXMOX_HOST ~/.ssh/id_ed25519.pub
```

Then connect:

```bash
ssh root@<LXC-IP>
```
## Part 4 — Sync Portal Files (First-Boot Assets)

The LXC serves first-boot assets (kiosk scripts, desktop files, splash, theme, etc.) from `/var/lib/cm4-provisioning/portal-files/`. These must be synced from the repo:

```bash
# From your workstation
./emmc-provisioning/scripts/sync-portal-files-to-lxc.sh root@<LXC-IP>
```

This rsyncs everything under `emmc-provisioning/cloud-init/fileserver/` to the LXC portal-files directory. Run it every time you update kiosk assets or first-boot scripts.
What gets synced:

| File | Purpose |
|---|---|
| `start-chromium.sh` | Wayland/labwc Chromium kiosk launcher |
| `five-tap-close-chromium.py` | 5-tap touch overlay to close Chromium |
| `chromium-kiosk.desktop` | Autostart: launches Chromium kiosk |
| `chromium-kiosk-no-select/` | Chromium extension: disables text selection |
| `set-rotation-at-login.sh`/`.desktop` | Per-login screen rotation |
| `01-set-rotation-once.sh`/`.desktop` | One-shot: rotation + dark theme + kanshi |
| `02-set-wallpaper-once.sh`/`.desktop` | One-shot: set wallpaper |
| `99-default-session.conf` | LightDM session = rpd-labwc |
| `custom.plymouth` + `custom.script` | Plymouth boot splash theme |
| `splash.png` | Boot splash / wallpaper image |
| `steps/01–13*.sh` | First-boot step scripts sourced by `first-boot.sh` |
## Part 5 — Network Boot Setup (Optional)

Only needed if you want devices to boot over the network (PXE-style via TFTP) for provisioning, rather than via USB cable.

### 5.1 Prerequisites

- The LXC must have been deployed with a LAN bridge (`DEPLOY_LXC_LAN_BRIDGE` set). The LXC's eth1 will be the provisioning LAN gateway.
- Devices must be connected to the same LAN as the LXC's eth1.
### 5.2 Run the network boot setup script

From your workstation:

```bash
./emmc-provisioning/scripts/setup-network-boot-on-lxc.sh root@<LXC-IP>
```
This SSH-connects to the LXC and runs the full setup inside the container. It performs the following:

1. Installs dnsmasq (DHCP + DNS server) and the `vlan` package (for VLAN interfaces).
2. Configures dnsmasq on eth1:
   - DHCP range: `<LAN_BASE>.100`–`<LAN_BASE>.200` (e.g. `10.20.50.100`–`10.20.50.200`).
   - DNS: static record `file.server` → LAN gateway IP, so first-boot scripts can reach `http://file.server/...`.
   - DHCP option 6: advertises the LXC as the DNS server to all DHCP clients.
3. Configures extra IPs on eth1: `192.168.30.1/24` and `192.168.127.1/24` (for serving multiple subnets).
4. Creates VLAN 40 interface `eth1.40` at `192.168.0.1/24` (for VLAN-tagged networks on the provisioning LAN).
5. Enables IP forwarding (`net.ipv4.ip_forward=1`), persisted in `/etc/sysctl.d/`.
6. Configures NAT (nftables, with iptables as fallback): masquerades all LAN traffic out eth0 so devices on the provisioning LAN get internet access.
7. Enables and starts dnsmasq.

Config files written:

- `/etc/dnsmasq.d/network-boot.conf` — DHCP + DNS on eth1
- `/etc/nftables.d/nat-lan.conf` — NAT rules
- `/etc/network/interfaces.d/70-cm4-extra-lan` — extra IPs and VLAN, persisted
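For reference, with the default `10.20.50.1/24` subnet the generated dnsmasq config would contain roughly the following (an illustrative sketch, not a verbatim copy of the script's output; the 12h lease time is an assumption):

```
# /etc/dnsmasq.d/network-boot.conf (sketch)
interface=eth1
# Hand out addresses in the documented range
dhcp-range=10.20.50.100,10.20.50.200,12h
# Static DNS record so devices can reach http://file.server/...
host-record=file.server,10.20.50.1
# Option 6: advertise the LXC itself as the DNS server
dhcp-option=6,10.20.50.1
```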
### 5.3 Enable PXE / TFTP network boot

TFTP is not enabled by default (dnsmasq is configured for DHCP + DNS only). To enable PXE/TFTP so devices can load a kernel and initramfs over the network:

```bash
# SSH into the LXC
ssh root@<LXC-IP>

# Enable PXE/TFTP (adds the PXE options to dnsmasq and restarts it)
/opt/cm4-provisioning/toggle-network-boot-dhcp.sh enable
```

This activates the PXE snippet at `/etc/dnsmasq.d/network-boot-pxe.conf` (DHCP options 66/67: next-server + boot file) and reloads dnsmasq.

To disable PXE again (keeping DHCP/DNS only):

```bash
/opt/cm4-provisioning/toggle-network-boot-dhcp.sh disable
```
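The PXE snippet that the toggle activates would look roughly like this (an illustrative sketch, not the script's exact output; the Pi bootloader speaks plain DHCP + TFTP, so a TFTP server plus a boot-service tag is the core of it):

```
# /etc/dnsmasq.d/network-boot-pxe.conf (sketch)
enable-tftp
tftp-root=/srv/tftpboot
# Tag the DHCP offer so the Pi bootloader proceeds with TFTP
pxe-service=0,"Raspberry Pi Boot"
```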
### 5.4 Populate the TFTP boot files

The TFTP root (`/srv/tftpboot`) needs Raspberry Pi 4 / CM4 boot files. From your workstation:

```bash
./emmc-provisioning/scripts/populate-tftpboot-from-git.sh root@<LXC-IP>
```

This downloads the official Raspberry Pi firmware `boot/` folder from GitHub into `/srv/tftpboot` on the LXC.

To add the custom provisioning initramfs (Alpine-based; allows Backup/Deploy from network boot):

```bash
# Ensure the initramfs image is built (or use the pre-built one in the repo)
ls emmc-provisioning/network-boot-initramfs/initrd.img

# Copy it to the LXC TFTP root
scp emmc-provisioning/network-boot-initramfs/initrd.img root@<LXC-IP>:/srv/tftpboot/

# Then ensure config.txt references it
./emmc-provisioning/scripts/ensure-tftpboot-config-kernel-initrd.sh root@<LXC-IP>
```
### 5.5 Configure device EEPROM for network boot

For a reTerminal to boot from the network (when the eMMC is empty or the network boot order is set), its EEPROM `BOOT_ORDER` must include network boot. The recommended order is `0xf21` (eMMC first, then network):

```bash
# Check current EEPROM boot order on a connected device
./emmc-provisioning/scripts/check-network-boot-priority.sh root@<DEVICE-IP>
```

To set the boot order via the provisioning dashboard (when the device is in USB boot mode), use the **Update EEPROM** button.
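`BOOT_ORDER` is read one nibble at a time from the right, so `0xf21` means: try SD/eMMC (`1`), then network (`2`), then restart the list (`f`). If you set it by hand instead, the relevant line in the EEPROM config (edited on-device with `rpi-eeprom-config --edit`) is:

```
BOOT_ORDER=0xf21   # right-to-left: 1=SD/eMMC, 2=network, f=restart the list
```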
### 5.6 Verify network boot is working

On the LXC, check the following:

```bash
# Is dnsmasq running?
systemctl status dnsmasq

# Is TFTP/PXE enabled?
/opt/cm4-provisioning/toggle-network-boot-dhcp.sh status

# Are TFTP boot files present?
ls /srv/tftpboot/start4cd.elf

# Any DHCP leases from devices?
cat /var/lib/misc/dnsmasq.leases

# Monitor live DHCP/TFTP traffic when powering on a device
tcpdump -i eth1 -n port 67 or port 68 or port 69
```
## Part 6 — Installing usbboot Manually (if needed)

If the deploy script could not install usbboot (e.g. no internet during deploy), install it manually:

```bash
# From your workstation
scp emmc-provisioning/scripts/install-usbboot-on-host.sh root@YOUR_PROXMOX_HOST:/tmp/
ssh root@YOUR_PROXMOX_HOST "bash /tmp/install-usbboot-on-host.sh"
```

Or, if `/tmp/emmc-provisioning-deploy` is still on the host:

```bash
ssh root@YOUR_PROXMOX_HOST "bash /tmp/emmc-provisioning-deploy/scripts/install-usbboot-on-host.sh"
```

After install, verify:

```bash
ssh root@YOUR_PROXMOX_HOST "ls /opt/usbboot/rpiboot && ls /opt/usbboot/mass-storage-gadget64/"
```
## Part 7 — Installing PiShrink Manually (if needed)

PiShrink enables the dashboard Shrink/Compress function (shrinks backup images before compressing). Install it if the deploy failed:

```bash
ssh root@YOUR_PROXMOX_HOST "bash /tmp/emmc-provisioning-deploy/scripts/install-pishrink-on-host.sh"

# Or stream from the workstation:
ssh root@YOUR_PROXMOX_HOST 'bash -s' < emmc-provisioning/scripts/install-pishrink-on-host.sh
```
## Part 8 — Updating / Redeploying

To push code changes (scripts, dashboard, udev, systemd) to an existing deployment:

```bash
./emmc-provisioning/scripts/deploy-to-proxmox.sh root@YOUR_PROXMOX_HOST
```

The script finds the container by hostname (`cm4-provisioning`) and updates all files. It does not overwrite your `golden.img` or `/etc/cm4-provisioning/enabled`.

To update only the dashboard (faster when only `app.py` or templates changed):

```bash
./emmc-provisioning/scripts/deploy-dashboard-to-lxc.sh root@<LXC-IP>
```

To update only the portal files (kiosk assets, first-boot scripts):

```bash
./emmc-provisioning/scripts/sync-portal-files-to-lxc.sh root@<LXC-IP>
```
## Deployment Checklist

| # | Action | Script / Command | Required? |
|---|---|---|---|
| | **Host prep** | | |
| 1 | SSH key access to Proxmox host | `ssh-copy-id root@HOST` | Yes |
| 2 | Create LAN bridge on Proxmox (`vmbr1`) | Proxmox Web UI | For network boot |
| | **Deploy** | | |
| 3 | Run deploy script | `deploy-to-proxmox.sh root@HOST` | Yes |
| 4 | Verify dashboard is up | `http://<LXC-IP>:5000` | Yes |
| 5 | Verify host services and udev rule | `ssh root@HOST "ls /etc/udev/rules.d/90-cm4-boot-mode.rules"` | Yes |
| | **Post-deploy** | | |
| 6 | Add golden image for Deploy | Dashboard Admin or `scp golden.img root@HOST:/var/lib/cm4-provisioning/` | For Deploy |
| 7 | Sync portal files (kiosk/first-boot assets) | `sync-portal-files-to-lxc.sh root@<LXC-IP>` | For first-boot provisioning |
| 8 | Enable SSH into LXC | `setup-lxc-ssh.sh root@HOST` | Optional |
| | **Network boot** | | |
| 9 | Run network boot setup on LXC | `setup-network-boot-on-lxc.sh root@<LXC-IP>` | For network boot only |
| 10 | Enable PXE/TFTP | `ssh root@<LXC-IP> /opt/cm4-provisioning/toggle-network-boot-dhcp.sh enable` | For PXE boot |
| 11 | Populate TFTP boot files | `populate-tftpboot-from-git.sh root@<LXC-IP>` | For PXE boot |
| 12 | Copy provisioning initramfs to TFTP | `scp network-boot-initramfs/initrd.img root@<LXC-IP>:/srv/tftpboot/` | For provisioning via netboot |
| 13 | Configure device EEPROM boot order | Dashboard **Update EEPROM** or `rpi-eeprom-config` | For network boot |
| | **If needed** | | |
| 14 | Install usbboot manually | `install-usbboot-on-host.sh` | If deploy had no internet |
| 15 | Install PiShrink manually | `install-pishrink-on-host.sh` | If deploy had no internet |
## After Deployment: Quick Reference

| What | How |
|---|---|
| Dashboard (WAN) | `http://<LXC-IP>:5000` |
| Dashboard (LAN) | `http://10.20.50.1:5000` (if LAN bridge was set) |
| SSH to LXC | `ssh root@<LXC-IP>` |
| Get LXC IP | `ssh root@HOST "pct list; pct exec <CTID> -- hostname -I"` |
| Golden image path | `/var/lib/cm4-provisioning/golden.img` (same on host and in LXC) |
| Disable auto-flash | `ssh root@HOST "rm /etc/cm4-provisioning/enabled"` |
| Re-enable auto-flash | `ssh root@HOST "touch /etc/cm4-provisioning/enabled"` |
| Flash log (on host) | `ssh root@HOST "tail -f /var/lib/cm4-provisioning/flash.log"` |
| Status JSON (on host) | `ssh root@HOST "cat /var/lib/cm4-provisioning/status.json"` |
| Full host snapshot | `ssh root@HOST 'bash -s' < emmc-provisioning/scripts/monitor-from-host.sh` |
| DHCP leases (LXC) | `ssh root@<LXC-IP> "cat /var/lib/misc/dnsmasq.leases"` |
## Troubleshooting

For USB flash errors (rpiboot failures, block device not found, USB transfer errors) and LXC/dashboard issues, see the full troubleshooting section in PROXMOX-LXC-DEPLOYMENT.md.

For network boot issues (DHCP not working, device not appearing in the dashboard), see NETWORK-BOOT-TROUBLESHOOTING.md.