# CM4 eMMC provisioning on Proxmox (LXC + host)
The auto-flash runs on the Proxmox host (where the USB device appears). The LXC holds the same scripts and shares the golden image directory with the host so you can manage the image from the container.
## What is deployed
| Where | What |
|---|---|
| Proxmox host | udev rule, trigger script, flash script, rpiboot (after you run the install script), /var/lib/cm4-provisioning/ (golden image dir), /etc/cm4-provisioning/enabled |
| LXC 201 (cm4-provisioning) | Same scripts in /opt/cm4-provisioning/, same env; /var/lib/cm4-provisioning/ is a bind mount from the host (shared storage for the golden image) |
When you plug the reTerminal in boot mode into the host, udev on the host runs the flash (rpiboot + dd). The golden image is read from /var/lib/cm4-provisioning/golden.img on the host (same path visible in the LXC).
## Deployment that was done
- LXC 201 created on Proxmox 10.130.60.224:
  - Hostname: `cm4-provisioning`
  - Debian 12, 1 GB RAM, 8 GB rootfs
  - Bind mount: host `/var/lib/cm4-provisioning` → container `/var/lib/cm4-provisioning`
- On the host:
  - `/opt/cm4-provisioning/flash-emmc-on-connect.sh` – flash script
  - `/usr/local/bin/cm4-flash-trigger.sh` – started by udev
  - `/etc/udev/rules.d/90-cm4-boot-mode.rules` – runs the trigger when USB vendor `2b8e` is added
  - `/opt/cm4-provisioning/env` – `GOLDEN_IMAGE`, `RPIBOOT_DIR`, `EMMC_SIZE_BYTES`
  - `/etc/cm4-provisioning/enabled` – safety switch (remove to disable auto-flash)
- Inside LXC 201:
  - Same scripts in `/opt/cm4-provisioning/` and env (for reference/backup)
  - Golden image path: `/var/lib/cm4-provisioning/golden.img` (bind-mounted from host)
  - Dashboard (optional): Flask app in `/opt/cm4-provisioning/dashboard/` to monitor deployment and show connection steps; see below.
- usbboot (rpiboot) was not built on the host (no outbound DNS during deploy). You must install it when the host has internet.
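The host-side wiring can be pictured with a minimal sketch of the udev rule. This is an illustration only — the exact match keys may differ; the deployed `/etc/udev/rules.d/90-cm4-boot-mode.rules` is authoritative:

```
# /etc/udev/rules.d/90-cm4-boot-mode.rules (sketch, not the deployed file)
# When a USB device with vendor ID 2b8e (Raspberry Pi in boot mode) is added,
# run the trigger script, which in turn starts the flash script.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="2b8e", RUN+="/usr/local/bin/cm4-flash-trigger.sh"
```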
## What you need to do
### 1. Build and install rpiboot on the Proxmox host (when it has internet)
On your machine (repo already synced to the host):
```
# From your repo
scp chromium-setup/emmc-provisioning/scripts/install-usbboot-on-host.sh root@10.130.60.224:/tmp/
ssh root@10.130.60.224 "bash /tmp/install-usbboot-on-host.sh"
```
Or on the host (if the deploy folder is still there):
```
ssh root@10.130.60.224
bash /tmp/emmc-provisioning-deploy/scripts/install-usbboot-on-host.sh
```
This installs dependencies, clones usbboot, builds it, and copies rpiboot to /opt/usbboot/.
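Those steps can be sketched roughly as below. This is a hedged outline, not the real script: the package list, the usbboot repo layout, and the `run`/`DRY_RUN` wrapper are assumptions, and `DRY_RUN` defaults to 1 so the sketch only prints the commands it would run.

```shell
#!/usr/bin/env bash
# Rough sketch of what install-usbboot-on-host.sh does; the real script is
# authoritative. With DRY_RUN=1 (the default) commands are printed, not run.
set -euo pipefail
RPIBOOT_DIR="${RPIBOOT_DIR:-/opt/usbboot}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"    # print the command instead of executing it
  else
    "$@"
  fi
}

run apt-get install -y git build-essential libusb-1.0-0-dev pkg-config
run git clone --depth 1 https://github.com/raspberrypi/usbboot /tmp/usbboot
run make -C /tmp/usbboot
run mkdir -p "$RPIBOOT_DIR"
run cp -r /tmp/usbboot/rpiboot /tmp/usbboot/mass-storage-gadget64 "$RPIBOOT_DIR/"
```

Set `DRY_RUN=0` to actually execute; the real script should still be preferred on the host.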
### 2. Enable root SSH and add your SSH key to LXC 201
No root password is set by default. To log in as root over SSH:
- Option A – Use the setup script (recommended). From your machine (with SSH key and optional password):

  ```
  # Add your default SSH key (~/.ssh/id_ed25519.pub or id_rsa.pub) and enable root SSH
  ./chromium-setup/emmc-provisioning/scripts/setup-lxc-ssh.sh root@10.130.60.224

  # Or specify the key file and set a root password
  ROOT_PASSWORD='YourPassword' ./chromium-setup/emmc-provisioning/scripts/setup-lxc-ssh.sh root@10.130.60.224 ~/.ssh/id_ed25519.pub
  ```

  Then connect with `ssh root@<LXC-IP>` (the script prints the IP). Get the IP anytime with:

  ```
  ssh root@10.130.60.224 "pct exec 201 -- hostname -I"
  ```

- Option B – Manual: `ssh root@10.130.60.224`, then `pct exec 201 -- bash` to get a shell in the container. Run `apt-get install -y openssh-server`, edit `/etc/ssh/sshd_config` to set `PermitRootLogin yes`, run `passwd` to set a root password, add your key to `/root/.ssh/authorized_keys`, and restart `ssh`.
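The sshd_config edit in Option B can be made idempotent with a small helper. This is a sketch, not part of the deployed scripts; it takes the config path as an argument so you can try it on a copy before touching `/etc/ssh/sshd_config`.

```shell
# Sketch: idempotently set "PermitRootLogin yes" in an sshd_config-style file.
# Replaces an existing (possibly commented) directive, or appends one.
enable_root_login() {
  local cfg="$1"
  if grep -qE '^[#[:space:]]*PermitRootLogin\b' "$cfg"; then
    sed -i -E 's/^[#[:space:]]*PermitRootLogin\b.*/PermitRootLogin yes/' "$cfg"
  else
    printf 'PermitRootLogin yes\n' >> "$cfg"
  fi
}

# In the container (after apt-get install -y openssh-server):
#   enable_root_login /etc/ssh/sshd_config && systemctl restart ssh
```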
### 3. (Optional) Store backup images on a host directory
To keep backup images on a specific host path (e.g. a large disk or NFS mount) instead of under /var/lib/cm4-provisioning/backups, deploy with CM4_BACKUPS_HOST_PATH set. That directory is created on the host, bind-mounted into the LXC at /var/lib/cm4-provisioning/backups, and the host flash script is configured to write backups there. The dashboard in the LXC then lists and serves those same files.
Deploy with a host backup path:
```
CM4_BACKUPS_HOST_PATH=/mnt/storage/cm4-backups ./chromium-setup/emmc-provisioning/scripts/deploy-to-proxmox.sh root@10.130.60.224
```
The deploy script tries to create `/mnt/storage/cm4-backups` (or your path) on the host; if it can't, create the directory yourself first. To add or change the backup mount on an already-deployed host, set `CM4_BACKUPS_HOST_PATH` and run the deploy script again, then on the host add `BACKUPS_DIR=<path>` to `/opt/cm4-provisioning/env` and add the bind mount (see the deploy script for the `pct set 201 -mp1 ...` step).
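On an already-deployed host, the manual change amounts to something like the following sketch. The mount-point slot `mp1` and the example path are assumptions — check the deploy script for the exact `pct set` step:

```
# Host: /opt/cm4-provisioning/env — point the flash script at the backup dir
BACKUPS_DIR=/mnt/storage/cm4-backups

# Host: bind-mount the same directory into LXC 201 (sketch)
pct set 201 -mp1 /mnt/storage/cm4-backups,mp=/var/lib/cm4-provisioning/backups
```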
### 4. Put the golden image on the host (or in the LXC)
The image must be at /var/lib/cm4-provisioning/golden.img on the host. Because that directory is bind-mounted into the LXC, you can use either:
- From the host:

  ```
  scp your-golden.img root@10.130.60.224:/var/lib/cm4-provisioning/golden.img
  ```

- From the LXC (e.g. after copying the image into the container elsewhere first):

  ```
  pct exec 201 -- ls -la /var/lib/cm4-provisioning/
  # Copy to that path inside the container; it's the same as the host path.
  ```
### 5. Run the provisioning dashboard (optional)
The dashboard shows connection steps and live deployment status (idle / connecting / flashing / done / error) and a recent flash log. It reads the same status.json and flash.log that the host’s flash script writes (via the bind-mounted /var/lib/cm4-provisioning).
Inside LXC 201:
```
# Copy dashboard into the container (from host, if you have the repo there)
# Or from your workstation:
# rsync -a chromium-setup/emmc-provisioning/dashboard/ root@10.130.60.224:/tmp/dashboard/
# ssh root@10.130.60.224 "pct push 201 /tmp/dashboard/app.py /opt/cm4-provisioning/dashboard/ && pct push 201 /tmp/dashboard/cm4-dashboard.service /opt/cm4-provisioning/dashboard/ && pct exec 201 -- mkdir -p /opt/cm4-provisioning/dashboard/templates && ..."

# Inside the LXC (pct exec 201 -- bash):
apt-get update && apt-get install -y python3-flask
mkdir -p /opt/cm4-provisioning/dashboard/templates
# Copy app.py, templates/index.html, cm4-dashboard.service into the container (see dashboard/README.md)
cp /opt/cm4-provisioning/dashboard/cm4-dashboard.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now cm4-dashboard
```
Then open `http://<LXC-201-IP>:5000` (get the IP with `pct exec 201 -- hostname -I`). If the LXC is on a private network, set up port forwarding on the Proxmox host or use a reverse proxy so you can reach the dashboard from your browser.
### 6. Optional: disable or enable auto-flash
- Disable: `ssh root@10.130.60.224 "rm /etc/cm4-provisioning/enabled"`
- Enable again: `ssh root@10.130.60.224 "touch /etc/cm4-provisioning/enabled"`
## Usage
- Place the reTerminal in boot mode (eMMC disable jumper).
- Connect its USB slave port to the Proxmox host (not to the LXC).
- Power the reTerminal (or connect after power).
- On the host, udev will run the trigger and then the flash script (rpiboot, then dd). Watch logs:

  ```
  ssh root@10.130.60.224 "journalctl -u cm4-flash-once -f"
  # or
  ssh root@10.130.60.224 "journalctl -t cm4-flash -f"
  ```

- When flashing finishes, remove the jumper and power-cycle the reTerminal so it boots from eMMC.
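After rpiboot, the eMMC should appear as a new disk (visible in `lsblk`). A hedged helper like the one below can watch `/sys/block` for a new entry; it is an illustration, not part of the deployed scripts, and takes the directory as a parameter so it can be tried anywhere.

```shell
# Sketch: wait for a new entry to appear in a directory and print its name.
# Pointed at /sys/block, it reports the disk the eMMC shows up as after rpiboot.
wait_for_new_entry() {
  local dir="$1" timeout="${2:-60}" before after new i
  before="$(ls -1 "$dir")"
  for ((i = 0; i < timeout; i++)); do
    after="$(ls -1 "$dir")"
    # Entries present now but not in the initial snapshot
    new="$(comm -13 <(sort <<<"$before") <(sort <<<"$after"))"
    if [ -n "$new" ]; then
      printf '%s\n' "$new"
      return 0
    fi
    sleep 1
  done
  return 1
}

# On the host (illustrative): wait_for_new_entry /sys/block 120
```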
## Monitoring from the host
From the Proxmox host you can monitor:
| What | How |
|---|---|
| USB device | lsusb — CM4 in boot mode shows as 2b8e (RPi) or 0a5c:2711 (Broadcom BCM2711) |
| Live status | cat /var/lib/cm4-provisioning/status.json — same JSON the dashboard shows (phase, message, error) |
| Flash log | tail -f /var/lib/cm4-provisioning/flash.log — script log (rpiboot, dd, errors) |
| Flash job | systemctl status cm4-flash-once — whether the udev-triggered job is running/failed |
| Journal | journalctl -u cm4-flash-once -f or journalctl -t cm4-flash -f — systemd/log output |
| Block devices | lsblk — after rpiboot, the eMMC appears as a new disk (e.g. /dev/sdb) |
| Backups | ls /var/lib/cm4-provisioning/backups/ — backup images (on host; if you used CM4_BACKUPS_HOST_PATH they are under that path on the host, bind-mounted into the LXC). To shrink automatically, set SHRINK_BACKUP=1 in /opt/cm4-provisioning/env — see EMMC-PROVISIONING-GUIDE.md § Shrinking backup and golden images. |
| Config | cat /opt/cm4-provisioning/env — GOLDEN_IMAGE, RPIBOOT_DIR, EMMC_SIZE_BYTES |
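For scripting on top of the table above, the `status.json` fields can be extracted without the dashboard. A hedged sketch — the `phase`/`message` field names come from this document, and python3 is used so jq doesn't have to be installed:

```shell
# Sketch: print "<phase>: <message>" from a status.json file.
# Defaults to the path the host flash script writes; pass another path to test.
status_summary() {
  local f="${1:-/var/lib/cm4-provisioning/status.json}"
  python3 - "$f" <<'PY'
import json, sys
with open(sys.argv[1]) as fh:
    s = json.load(fh)
print(f"{s.get('phase', '?')}: {s.get('message', '')}")
PY
}

# On the host (illustrative): status_summary
```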
One-command snapshot:
```
# From your machine (stream script to host):
ssh root@10.130.60.224 'bash -s' < chromium-setup/emmc-provisioning/scripts/monitor-from-host.sh
```
Or copy `scripts/monitor-from-host.sh` to the host and run `./monitor-from-host.sh` for a full status dump (USB, status.json, flash unit, last log lines, block devices, config).
## Troubleshooting: device connected but not shown in portal
- Host has old flash script – the script must not exit when the golden image is missing (so you can use Backup first). Update the host:

  ```
  scp chromium-setup/emmc-provisioning/host/flash-emmc-on-connect.sh root@10.130.60.224:/opt/cm4-provisioning/
  ssh root@10.130.60.224 "chmod +x /opt/cm4-provisioning/flash-emmc-on-connect.sh"
  ```

- Unplug and replug the USB – udev runs the trigger only when the device is added. Unplug the reTerminal USB (keep it in boot mode), then plug it back in. The trigger will run the script and rpiboot; when the eMMC is exposed, the portal shows "Device connected" with Backup/Deploy.
- If rpiboot fails – check on the host: `ssh root@10.130.60.224 'tail -30 /var/lib/cm4-provisioning/flash.log'` (rpiboot stderr is appended there). Try unplug/replug again. To see the exact rpiboot error: `ssh root@10.130.60.224 '/opt/usbboot/rpiboot -d /opt/usbboot/mass-storage-gadget64'` (device connected; Ctrl+C to stop). Run `scripts/monitor-from-host.sh` for a full snapshot.
- "No 'bootcode' files found in mass-storage-gadget64" – usually because `bootfiles.bin` is a broken symlink (e.g. `-> ../firmware/bootfiles.bin`) and that target doesn't exist. Fix on host: run `scripts/fix-gadget-bootcode-on-host.sh` on the host (it removes the symlink and extracts `bootcode4.bin` from the installed rpiboot binary). From your machine: `ssh root@10.130.60.224 'bash -s' < scripts/fix-gadget-bootcode-on-host.sh`. Alternative: repopulate the gadget dir with `./scripts/populate-gadget-on-host.sh root@10.130.60.224`, or do a full reinstall with `./scripts/build-and-deploy-usbboot-to-host.sh root@10.130.60.224`. Then verify: `ls -la /opt/usbboot/mass-storage-gadget64/` (should list a real `bootcode4.bin` or `bootfiles.bin`, plus `boot.img`, `config.txt`).
- Clear stuck error in portal – if the portal shows an old error (e.g. "Golden image not found" or "rpiboot failed"), click Clear message in the dashboard, or: `ssh root@10.130.60.224 "echo '{\"phase\":\"idle\",\"message\":\"Waiting for reTerminal in boot mode or network.\",\"progress\":null}' > /var/lib/cm4-provisioning/status.json"`. Then unplug/replug the device.
- Backup stops before finishing – if a backup or shrink appears to stop partway (e.g. the dashboard is stuck on "Creating backup…" or "Shrinking…"), the service may have been killed by systemd. The `cm4-flash.service` unit uses `TimeoutStartSec=7200` (2 hours); if you deployed an older version with 15 minutes, redeploy so the host gets the updated unit, then run `systemctl daemon-reload` on the host so the next backup has enough time to complete.
- The trigger now runs the flash script in the background (not via systemd-run) so it can access the USB device; a 2 s delay gives the device time to enumerate before rpiboot runs.
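The broken-symlink failure mode described under "No 'bootcode' files found" can be checked without running rpiboot. A hedged sketch — the file names match the troubleshooting entry above, but the function is an illustration, not a deployed script:

```shell
# Sketch: report whether a gadget dir has a usable bootcode file, or whether
# bootfiles.bin is the broken symlink described above. Returns 0 if usable.
check_gadget_dir() {
  local dir="${1:-/opt/usbboot/mass-storage-gadget64}" f
  for f in bootcode4.bin bootfiles.bin; do
    if [ -e "$dir/$f" ]; then
      echo "ok: $dir/$f"
      return 0
    fi
  done
  for f in bootcode4.bin bootfiles.bin; do
    if [ -L "$dir/$f" ]; then
      echo "broken symlink: $dir/$f -> $(readlink "$dir/$f")"
      return 1
    fi
  done
  echo "no bootcode files in $dir"
  return 1
}

# On the host (illustrative): check_gadget_dir
```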
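The trigger behaviour in the last bullet (short delay, then run the flash script detached so udev's cleanup doesn't kill it) can be sketched as below. `FLASH_SCRIPT`, `LOG`, and `DELAY` are illustrative parameters; the deployed `/usr/local/bin/cm4-flash-trigger.sh` is authoritative.

```shell
# Sketch of the trigger: give the USB device time to enumerate, then run the
# flash script in the background, detached from the caller (udev).
trigger_flash() {
  local script="${FLASH_SCRIPT:-/opt/cm4-provisioning/flash-emmc-on-connect.sh}"
  local log="${LOG:-/var/lib/cm4-provisioning/flash.log}"
  local delay="${DELAY:-2}"
  ( sleep "$delay"; exec "$script" >>"$log" 2>&1 ) </dev/null &
  disown
}
```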
## Redeploy / update scripts
From your repo (e.g. after changing scripts):
```
./chromium-setup/emmc-provisioning/scripts/deploy-to-proxmox.sh root@10.130.60.224
```
That script syncs the repo to the host and reinstalls the scripts on both the host and LXC 201. It does not overwrite `/opt/cm4-provisioning/env` or `/etc/cm4-provisioning/enabled` if you've changed them; adjust the script if you want that. It also does not build usbboot; run `install-usbboot-on-host.sh` on the host when needed.
## Summary
| Item | Location |
|---|---|
| LXC | 201, hostname cm4-provisioning, Proxmox 10.130.60.224 |
| Golden image | /var/lib/cm4-provisioning/golden.img (host and LXC see the same file) |
| Flash runs on | Proxmox host (udev + rpiboot + dd) |
| Build rpiboot on host | Run scripts/install-usbboot-on-host.sh on the host when it has internet |
| Dashboard | Flask app in LXC at http://<LXC-IP>:5000; switch Flash/Backup mode, list and download backups; see dashboard/README.md and section 5 above |
| Backups | Saved under /var/lib/cm4-provisioning/backups/ (optionally a host path bind-mounted into the LXC — set CM4_BACKUPS_HOST_PATH at deploy). When a device is detected, choose Backup or Deploy in the dashboard. |
| Network deploy/backup | Network-booted devices run network-client/provisioning-client.sh and register with the dashboard; they then appear under "Device detected (Network)" and you choose Backup or Deploy. See network-client/README.md. |