3. Provision devices via USB boot or network boot; first-boot configures the Chromium kiosk, labwc Wayland desktop, screen rotation, wallpaper, dark theme, and CM4 boot order.
This folder holds files that are no longer part of the active reTerminal DM4 / eMMC provisioning workflow. Kept for historical reference only.
| Subfolder | Contents |
|-----------|----------|
| **chromium-setup-legacy/** | Old Chromium-setup guides and scripts: KDE installation, LED/buzzer control, audio config, touchscreen options, Flask apps, test scripts, revert-to-lxde. The active kiosk launcher lives at `emmc-provisioning/cloud-init/fileserver/start-chromium.sh` (Wayland/labwc). |
| **cloud-init-duplicates/** | Duplicate or superseded cloud-init files (e.g. plymouth-custom.script duplicate of `files-from-guard/plymouth-custom/custom.script`). |
Do not rely on archived files for deployment; use the active tree under **emmc-provisioning/**.
2. **Deploy to a new Proxmox:** Follow [docs/DEPLOY-NEW-PROXMOX.md](docs/DEPLOY-NEW-PROXMOX.md) for step-by-step instructions. [docs/EMMC-PROVISIONING-GUIDE.md](docs/EMMC-PROVISIONING-GUIDE.md) covers golden image creation, cloud-init, and PiShrink; [docs/PROXMOX-LXC-DEPLOYMENT.md](docs/PROXMOX-LXC-DEPLOYMENT.md) covers options, layout, and troubleshooting, plus [scripts/deploy-to-proxmox.sh](scripts/deploy-to-proxmox.sh) itself.
3. **Redeploy / update:** Re-run `scripts/deploy-to-proxmox.sh root@HOST` — updates scripts, dashboard, udev, and systemd without touching your golden image or enabled flag.
4. **Sync portal files:** After deploy, or whenever kiosk/first-boot assets change: `scripts/sync-portal-files-to-lxc.sh root@<LXC-IP>`.
5. **Golden image:** Put **golden.img** in `/var/lib/cm4-provisioning/` (or your configured path). When a device is detected (USB or network), the **dashboard** asks **Backup** or **Deploy**.
6. **Troubleshooting:** [docs/PROXMOX-LXC-DEPLOYMENT.md](docs/PROXMOX-LXC-DEPLOYMENT.md) for USB errors, rpiboot failures, and monitoring; [docs/NETWORK-BOOT-TROUBLESHOOTING.md](docs/NETWORK-BOOT-TROUBLESHOOTING.md) for network boot issues.
# Deploying the CM4 eMMC Provisioning Stack to Proxmox

Complete step-by-step guide for deploying the provisioning service (Proxmox host + LXC container) on a new or existing Proxmox server. Covers host preparation, network bridge configuration, LXC deployment, post-deploy setup, network boot, and first-boot asset sync.

For reference details (troubleshooting, redeploy, architecture), see [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md).
---
## Overview

The provisioning stack consists of two parts that work together:

| Component | Where it runs | What it does |
|-----------|--------------|--------------|
| **Host scripts + udev** | Proxmox host | Detects a CM4 over USB, runs `rpiboot`, then `dd` to write/read the eMMC |
| **LXC container** (`cm4-provisioning`) | Proxmox LXC | Runs the Flask dashboard on port 5000; serves portal files and golden images |

The host and LXC share `/var/lib/cm4-provisioning/` via a bind-mount, so images and status files are visible from both.

---

## Part 1 — Prerequisites

**Optional environment variables (set before running the deploy script):**

- `DEPLOY_ROOTFS_STORAGE=local-lvm` — Skip the interactive storage choice when creating the LXC.
- `DEPLOY_LXC_ROOT_PASSWORD=yourpassword` — Set the LXC root password and enable SSH.
- `DEPLOY_LXC_SSH_KEY=/path/to/pub` — Copy this key into the LXC (default: `~/.ssh/id_ed25519.pub` or `id_rsa.pub`).
- `CM4_BACKUPS_HOST_PATH=/mnt/storage/cm4-backups` — Store backups on this host path (create the directory on the host if needed).
- `DEPLOY_LXC_LAN_BRIDGE=vmbr1`, `DEPLOY_LXC_LAN_SUBNET=10.20.50.1/24` — Add eth1 as a provisioning LAN. **Set these if you want the portal reachable from the LAN** (e.g. http://10.20.50.1:5000); the dashboard listens on all interfaces.
### 1.1 Hardware & OS
- Proxmox VE 7 or 8 installed on a physical machine.
- At least one USB port accessible to the host (not passed through to a VM) for the reTerminal USB slave cable.
- At least one active storage (local or local-lvm). Check: `pvesm status`.
- Internet access on the host (needed for the initial usbboot/PiShrink install and the Debian 12 LXC template download). Without internet, the deploy still runs, but you must install usbboot and PiShrink manually afterwards.
### 1.2 SSH key access from your workstation
The deploy script connects as `root` using key-based auth. Set this up if not already done:
```bash
# On your workstation — copy your public key to the Proxmox host
ssh-copy-id root@YOUR_PROXMOX_HOST
# Verify
ssh root@YOUR_PROXMOX_HOST "echo OK"
```
### 1.3 Proxmox network bridges
The LXC needs at least one bridge for WAN access. For a provisioning LAN (serving DHCP to devices), it needs a second bridge.
#### WAN bridge (required)
The default WAN bridge is `vmbr0`, which is created automatically by Proxmox during installation and connects to your primary network. No extra configuration needed.
#### LAN bridge for provisioning (required for network boot / device DHCP)
If you want the LXC to serve DHCP, TFTP, and DNS on a dedicated provisioning LAN (so devices can network-boot), create a second Linux bridge on the Proxmox host:
1. Open **Proxmox Web UI → Node → Network → Create → Linux Bridge**.
2. Set:
- **Name:** `vmbr1` (or any unused bridge name)
- **Bridge ports:** the physical NIC connected to your provisioning LAN switch (e.g. `enp2s0`). Leave blank for an internal-only bridge (useful for testing with no physical switch).
- **IPv4/CIDR:** leave blank (the LXC handles the IP on this bridge, not the host).
- **Autostart:** checked.
3. Click **Create**, then **Apply Configuration**.
> **Note:** If you connect the reTerminals via a switch that is also connected to `enp2s0`, traffic flows directly. If there is no physical NIC to dedicate, leave bridge ports blank and connect all provisioning devices as VMs/LXCs on the same internal bridge.
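If you prefer the CLI to the web UI, the equivalent bridge definition can be sketched in `/etc/network/interfaces` on the Proxmox host (illustrative; `vmbr1` and `enp2s0` are the example names from the steps above — drop the `bridge-ports` line for an internal-only bridge):

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
```

Apply with `ifreload -a` (ifupdown2 is standard on Proxmox) or reboot.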
---
## Part 2 — Running the Deploy Script
From your **workstation** (where this repo is cloned), run a single command to deploy everything:
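A typical invocation looks like this (the script path comes from this repo's `scripts/` directory; the env-var prefix is optional — use any of the variables described in the prerequisites, or none):

```bash
# From the repo root on your workstation; env vars are optional
DEPLOY_ROOTFS_STORAGE=local-lvm \
DEPLOY_LXC_LAN_BRIDGE=vmbr1 DEPLOY_LXC_LAN_SUBNET=10.20.50.1/24 \
scripts/deploy-to-proxmox.sh root@YOUR_PROXMOX_HOST
```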
- On **first run**, the script asks you to choose the LXC rootfs storage (unless `DEPLOY_ROOTFS_STORAGE` is set). It then creates the LXC and installs host scripts, udev rules, systemd units, and the dashboard in the LXC.
- The script prints **LXC IP (WAN)** and, if you set `DEPLOY_LXC_LAN_BRIDGE`, **LXC IP (LAN)**. The portal is reachable at `http://<IP>:5000` on both; use the LAN IP from devices on the provisioning LAN.

### 2.3 What the deploy script does

The script runs five stages:

1. **Check** — SSHes to the host and finds an existing `cm4-provisioning` container by hostname (or lists storage for new container creation).
2. **Clean + rsync** — Wipes `/tmp/emmc-provisioning-deploy` on the host and rsyncs the entire repo there (excluding `.git` and deploy logs).
3. **Install** — Sets up the host and the LXC:
   - Installs `python3-flask` and `openssh-server` in the LXC (skipped if already present).
   - Deploys the Flask dashboard and enables/restarts `cm4-dashboard.service` in the LXC.
   - Installs usbboot (`rpiboot`) on the host if not already present.
   - Installs PiShrink on the host if not already present.
4. **LXC start** — Starts the LXC if it is stopped.
5. **Summary** — Prints the LXC WAN IP (and LAN IP if set), the dashboard URL, and any remaining manual steps.
On **redeploy** (container already exists): host scripts, dashboard, env, systemd, and udev are always updated. LXC creation, bind-mounts, apt installs, usbboot, and PiShrink are skipped when already present.
---
## Part 3 — Post-Deploy: Required Manual Steps

### Step 2: Install usbboot on the host (if the host had no internet during deploy)

USB flash/backup needs **rpiboot** on the Proxmox **host**. If the deploy log said the usbboot install failed or was skipped, install it manually on the host (see [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md)), then verify the host-side units:

```bash
# On the host — check the watcher units
systemctl status cm4-build-cloudinit.path cm4-shrink.path
# Check auto-flash is enabled
ls /etc/cm4-provisioning/enabled
```
---
### Step 3: Add a golden image (required for Deploy)
To **write** an image to a device (Deploy), the host must have a **golden image** at `/var/lib/cm4-provisioning/golden.img`. Backup (reading from the device) works without it.
**Option A — Build via the dashboard:**

1. Open `http://<LXC-IP>:5000` (use the LXC IP from the deploy output) and go to the Admin tab.
2. Click **Build cloud-init image**: the host downloads the latest Raspberry Pi OS, injects your cloud-init `user-data`, and creates `golden.img`.
3. Click **Set as golden** once the build finishes.

**Option B — Use an existing backup:**

In the dashboard Admin → Images tab, select a backup and click **Set as golden**.
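If you already have a suitable image file, you can also copy it into place manually — a sketch assuming key-based root SSH to the Proxmox host and the default golden-image path from above:

```bash
# Copy a local image to the shared provisioning directory on the host
scp my-image.img root@YOUR_PROXMOX_HOST:/var/lib/cm4-provisioning/golden.img
```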
### Step 4: Enable SSH into the LXC (optional)

If you ran the deploy with `DEPLOY_LXC_ROOT_PASSWORD` or a default SSH key, the LXC already has SSH enabled. Otherwise:

```bash
# From your workstation — adds your default SSH key and enables root SSH
# (script path/arguments may vary; see the summary checklist)
scripts/setup-lxc-ssh.sh root@YOUR_PROXMOX_HOST
```

Then connect:

```bash
ssh root@<LXC-IP>
```

---

## Accessing the portal from the LAN

The dashboard listens on **all interfaces** (`0.0.0.0:5000`), so it is reachable on both WAN and LAN IPs when the LXC has two networks.

- **Deploy with a LAN interface:** set `DEPLOY_LXC_LAN_BRIDGE=vmbr1` (and optionally `DEPLOY_LXC_LAN_SUBNET=10.20.50.1/24`) when running the deploy script. The LXC gets eth1 with the LAN IP (e.g. 10.20.50.1).
- **From the provisioning LAN:** open **http://<LAN-IP>:5000** (e.g. http://10.20.50.1:5000). Devices on that subnet can use the portal without going through WAN.
- If you did not set a LAN bridge at deploy time, you only have one IP (WAN); use that for the portal. To add a LAN later, add eth1 to the container and reconfigure (see [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md)).
---
## Part 4 — Sync Portal Files (First-Boot Assets)
The LXC serves first-boot assets (kiosk scripts, desktop files, splash, theme, etc.) from `/var/lib/cm4-provisioning/portal-files/`. These must be synced from the repo.
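The sync command, as given in the quick-start list, run from the repo root on your workstation:

```bash
scripts/sync-portal-files-to-lxc.sh root@<LXC-IP>
```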
This rsyncs everything under `emmc-provisioning/cloud-init/fileserver/` to the LXC portal-files directory. Run this every time you update kiosk assets or first-boot scripts.
1. **Installs dnsmasq** in the LXC.
2. **Configures DHCP + DNS on eth1**:
   - DNS: static record `file.server` → LAN gateway IP, so first-boot scripts can reach `http://file.server/...`.
   - DHCP option 6: sends the LXC as the DNS server to all DHCP clients.
3. **Configures extra IPs on eth1**: `192.168.30.1/24`, `192.168.127.1/24` (for serving multiple subnets).
4. **Creates VLAN 40** interface `eth1.40` at `192.168.0.1/24` (for VLAN-tagged networks on the provisioning LAN).
5. **Enables IP forwarding** (`net.ipv4.ip_forward=1`), persisted in `/etc/sysctl.d/`.
6. **Configures NAT** (nftables, with iptables fallback): masquerades all LAN traffic out eth0 so devices on the provisioning LAN get internet access.
7. **Enables and starts dnsmasq**.
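The NAT step can be sketched as the following nftables fragment (illustrative only — the setup script writes the real rules; the table and chain names here are assumptions):

```
table ip cm4_nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # Masquerade provisioning-LAN traffic leaving via the WAN interface
        oifname "eth0" masquerade
    }
}
```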
Config files written:

- `/etc/dnsmasq.d/network-boot.conf` — DHCP + DNS on eth1
- `/etc/nftables.d/nat-lan.conf` — NAT rules
- `/etc/network/interfaces.d/70-cm4-extra-lan` — extra IPs and VLAN, persisted
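For orientation, a minimal sketch of what `network-boot.conf` contains (the setup script writes the real file; the DHCP pool below is an assumption — only the option-6 and `file.server` entries are described above):

```
# /etc/dnsmasq.d/network-boot.conf (illustrative sketch)
interface=eth1
dhcp-range=10.20.50.100,10.20.50.200,12h   # assumed pool within the LAN subnet
dhcp-option=6,10.20.50.1                   # option 6: the LXC is the DNS server
address=/file.server/10.20.50.1            # static record: file.server -> LAN gateway IP
```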
### 5.3 Enable PXE / TFTP network boot
TFTP is not enabled by default (dnsmasq is configured for DHCP + DNS only). To enable PXE/TFTP so devices can load a kernel and initramfs over the network:
```bash
# SSH into the LXC
ssh root@<LXC-IP>
```

Then enable PXE/TFTP (the enable step adds the PXE options to dnsmasq and restarts it).

### 5.4 Set the EEPROM boot order

For a reTerminal to boot from the network (when the eMMC is empty or the network boot order is set), its EEPROM `BOOT_ORDER` must include network boot. The recommended order is `0xf21` (eMMC first, then network):
```bash
# Check the current EEPROM boot order (run on the device itself)
rpi-eeprom-config | grep BOOT_ORDER
```
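`BOOT_ORDER` is read one hex digit at a time, least-significant first. A small Python helper (illustrative; mode names follow the Raspberry Pi bootloader documentation) shows why `0xf21` means eMMC, then network, then restart:

```python
# Decode a Raspberry Pi EEPROM BOOT_ORDER value.
# Nibbles are tried right to left; common modes per the bootloader docs:
MODES = {
    0x1: "SD card / eMMC",
    0x2: "network",
    0x3: "RPIBOOT (USB device boot)",
    0x4: "USB mass storage",
    0x6: "NVMe",
    0xE: "stop",
    0xF: "restart from first",
}

def decode_boot_order(value: int) -> list[str]:
    """Return boot modes in the order they are tried (least-significant nibble first)."""
    modes = []
    while value:
        nibble = value & 0xF
        modes.append(MODES.get(nibble, f"unknown(0x{nibble:x})"))
        value >>= 4
    return modes

print(decode_boot_order(0xF21))
# → ['SD card / eMMC', 'network', 'restart from first']
```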
To install PiShrink manually (if the deploy could not), run `install-pishrink-on-host.sh` on the host; or, from your machine, stream the script using the same pattern as in [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md).
---
## Part 8 — Updating / Redeploying

To push code changes (scripts, dashboard, udev, systemd) to an existing deployment, re-run `scripts/deploy-to-proxmox.sh root@YOUR_PROXMOX_HOST`. The script finds the container by hostname (`cm4-provisioning`) and updates all files. It does **not** overwrite your `golden.img` or `/etc/cm4-provisioning/enabled`.

A dashboard-only update (faster when only `app.py` or templates changed) is also possible; see [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md).

---

## Summary checklist

| Step | Action | Required? |
|------|--------|------------|
| 1 | Run `deploy-to-proxmox.sh root@YOUR_PROXMOX_HOST` | **Yes** |
| 2 | Install usbboot on the host (if deploy couldn't) | For USB flash/backup |
| 3 | Add `golden.img` | For Deploy only |
| 4 | SSH to the LXC (or use setup-lxc-ssh.sh) | Optional |
| 5 | Run setup-network-boot-on-lxc.sh (if using an eth1 LAN) | Optional |
| 6 | Install PiShrink on the host (if deploy couldn't) | For Shrink/Compress |

- **Dashboard:** http://<LXC-IP>:5000 (WAN). If you set `DEPLOY_LXC_LAN_BRIDGE`, also **http://<LAN-IP>:5000** (e.g. http://10.20.50.1:5000) from the LAN.
- **Golden image path (host and LXC):** `/var/lib/cm4-provisioning/golden.img`

---

## Troubleshooting

**If you see "rpiboot failed or no device connected":** the error comes from the **Proxmox host** (where USB is connected). On the host, run `tail -50 /var/lib/cm4-provisioning/flash.log` to see the real rpiboot message. Ensure the reTerminal is in **boot mode** (eMMC disable jumper, USB slave port), then unplug/replug. See [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md) § "If rpiboot fails" for full steps.

For other USB flash errors (rpiboot failures, block device not found, USB transfer errors) and LXC/dashboard issues, see the full troubleshooting section in [PROXMOX-LXC-DEPLOYMENT.md](PROXMOX-LXC-DEPLOYMENT.md). For network boot issues (DHCP not working, device not appearing in the dashboard), see [NETWORK-BOOT-TROUBLESHOOTING.md](NETWORK-BOOT-TROUBLESHOOTING.md).