# How network boot deployment works
This describes the full flow from power-on to eMMC deploy/backup when using network boot with the provisioning LXC.
## Overview
- The reTerminal is set to try network boot first (EEPROM `BOOT_ORDER=0x12`; nibbles are tried right to left, so network (2) comes before SD/eMMC (1)).
- It is connected to the same LAN as the LXC’s eth1 (e.g. 10.20.50.0/24).
- On power-on it gets an IP via DHCP and loads boot files via TFTP from the LXC.
- The netboot environment (kernel + rootfs) runs provisioning-client.sh, which registers with the dashboard and polls for an action.
- In the dashboard you see the device under “Device detected (Network)” and choose Deploy or Backup.
- The device performs the action (download image → write eMMC, or read eMMC → upload), then you can reboot to run from eMMC.
## Step-by-step
### 1. LXC (provisioning server)
- eth0 = WAN (e.g. 10.130.60.141), internet for the LXC.
- eth1 = LAN (e.g. 10.20.50.1/24):
  - dnsmasq: DHCP on eth1 (e.g. 10.20.50.100–200) and TFTP, with next-server = 10.20.50.1 and boot file = `start4cd.elf`.
  - TFTP root `/srv/tftpboot`: Raspberry Pi 4/CM4 boot files (from GitHub: `start4cd.elf`, `fixup4cd.dat`, `kernel8.img`, etc.).
  - NAT: traffic from 10.20.50.0/24 is masqueraded out eth0 so netbooted devices have internet access if needed.
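A dnsmasq configuration matching the above might look like this (a sketch only; the interface name, DHCP range, and TFTP root are taken from the examples above and may differ in your setup):

```
# /etc/dnsmasq.d/provisioning.conf (sketch)
interface=eth1                            # serve DHCP/TFTP only on the provisioning LAN
dhcp-range=10.20.50.100,10.20.50.200,12h
dhcp-boot=start4cd.elf                    # boot file; next-server defaults to this host
enable-tftp
tftp-root=/srv/tftpboot
```

With only a filename in `dhcp-boot`, dnsmasq advertises itself (10.20.50.1 on eth1) as the TFTP next-server.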
The dashboard (Flask) runs in the LXC and is reachable at e.g. http://10.20.50.1:5000 from the LAN. The golden image for Deploy lives at /var/lib/cm4-provisioning/golden.img (same LXC or bind-mounted from host).
### 2. reTerminal (device)
- EEPROM: `BOOT_ORDER=0x12` (network first, then SD/eMMC). This can be set by cloud-init on the first boot of an already-flashed device.
- Network: Ethernet connected to the same segment as the LXC’s eth1 (e.g. same switch/VLAN as 10.20.50.0/24).
- On power-on:
- Pi 4/CM4 firmware does DHCP on the wired interface.
- DHCP reply gives: IP (e.g. 10.20.50.100), next-server (TFTP) = 10.20.50.1, boot filename = start4cd.elf.
- Device TFTPs boot files from the LXC (start4cd.elf, fixup4cd.dat, kernel, DTB, etc.).
- It boots the kernel (and optionally an initramfs or NFS root). That environment must have network, curl, and provisioning-client.sh.
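On an already-flashed device, the boot order can be changed with the `rpi-eeprom-config` tool. A minimal sketch of the edit step, run here against a scratch copy of the config rather than real hardware (on the device you would pipe `rpi-eeprom-config` output through the same `sed` and apply the result with `rpi-eeprom-config --apply`):

```shell
# Sketch: make network boot (2) come before SD/eMMC (1).
# BOOT_ORDER nibbles are tried right to left, so 0x12 means network, then SD/eMMC.
cfg=$(mktemp)
printf 'BOOT_UART=0\nBOOT_ORDER=0x1\n' > "$cfg"        # stand-in for rpi-eeprom-config output
sed -i 's/^BOOT_ORDER=.*/BOOT_ORDER=0x12/' "$cfg"
grep '^BOOT_ORDER=' "$cfg"                             # on hardware: rpi-eeprom-config --apply "$cfg"
```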
### 3. Netboot root / environment
The TFTP-loaded kernel (and optional initramfs/NFS root) must end up in an environment where:
- The device has an IP on the same LAN as the LXC (already from DHCP).
- provisioning-client.sh is present and run (e.g. from init, a login script, or a systemd service).
- `PROVISIONING_SERVER` is set to the dashboard URL on the LXC’s LAN IP, e.g. `PROVISIONING_SERVER=http://10.20.50.1:5000`.
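One way to satisfy the last two points is a systemd unit in the netboot root (a sketch; the unit name and the script path are assumptions):

```
# /etc/systemd/system/provisioning-client.service (sketch)
[Unit]
Description=CM4 provisioning client
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
Environment=PROVISIONING_SERVER=http://10.20.50.1:5000
ExecStart=/usr/local/bin/provisioning-client.sh

[Install]
WantedBy=multi-user.target
```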
So the “netboot environment” is either:
- A custom initramfs (recommended): build with network-boot-initramfs/build.sh, copy initrd.img to the TFTP root, and add `initramfs initrd.img followkernel` to config.txt. The initramfs brings up the network and runs the provisioning client. See network-boot-initramfs/README.md.
- A minimal rootfs (e.g. NFS) that runs the client script at boot, or
- Any other setup that gets the client running with network access and the right `PROVISIONING_SERVER`.
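For the initramfs option, the config.txt served from the TFTP root gains one line (the filename must match the initrd you copied in):

```
# /srv/tftpboot/config.txt (fragment)
initramfs initrd.img followkernel
```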
### 4. Provisioning client (on the device)
- provisioning-client.sh:
  - Registers: `POST /api/register-device` with MAC and IP.
  - Polls: `GET /api/device-action-poll?mac=...` every few seconds.
  - When the dashboard returns action = deploy (with url): downloads the image from url and runs `dd of=/dev/mmcblk0`.
  - When the dashboard returns action = backup (with upload_url): runs `dd if=/dev/mmcblk0` and POSTs the stream to upload_url.
  - Then exits (and you can reboot to eMMC after deploy).
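The decision the client makes on each poll response can be sketched as follows. This is hypothetical: the JSON field names and the exact dd/curl pipelines are assumptions based on the flow above, and the function only prints the command it would run rather than touching `/dev/mmcblk0`:

```shell
# handle_response: given a poll response body, print the pipeline the client would run.
# Field extraction uses sed for self-containment; a real script might use jq.
handle_response() {
  resp=$1
  action=$(printf '%s' "$resp" | sed -n 's/.*"action" *: *"\([^"]*\)".*/\1/p')
  case "$action" in
    deploy)
      url=$(printf '%s' "$resp" | sed -n 's/.*"url" *: *"\([^"]*\)".*/\1/p')
      echo "curl -fsS $url | dd of=/dev/mmcblk0 bs=4M conv=fsync" ;;
    backup)
      up=$(printf '%s' "$resp" | sed -n 's/.*"upload_url" *: *"\([^"]*\)".*/\1/p')
      echo "dd if=/dev/mmcblk0 bs=4M | curl -fsS -X POST --data-binary @- $up" ;;
    *)
      echo "sleep 5" ;;                    # no action yet: wait and poll again
  esac
}
handle_response '{"action":"deploy","url":"http://10.20.50.1:5000/golden.img"}'
```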
### 5. Dashboard (your actions)
- You open the dashboard at `http://10.20.50.1:5000` (or the LXC’s WAN IP if you’re not on the provisioning LAN).
- Under “Device detected (Network)” you see the device (identified by MAC).
- You click Deploy or Backup.
- The dashboard sets the action (and URL/upload_url) for that MAC; the next device-action-poll returns it, and the client runs the corresponding dd + curl.
## Data flow summary
| Stage | Where | What happens |
|---|---|---|
| Boot | reTerminal | DHCP (get IP + next-server + boot file), then TFTP (load start4cd.elf, kernel, etc.). |
| Boot | reTerminal | Kernel (and netboot root) start; run provisioning-client.sh with PROVISIONING_SERVER=http://10.20.50.1:5000. |
| Register | Device → LXC | POST /api/register-device (MAC, IP). |
| Poll | Device → LXC | GET /api/device-action-poll?mac=... every 5 s. |
| Your choice | You → LXC | In dashboard: click Deploy or Backup for that device. |
| Deploy | LXC → device | Client GETs image URL, streams to dd of=/dev/mmcblk0. |
| Backup | Device → LXC | Client dd if=/dev/mmcblk0 and POSTs to upload_url. |
| After | reTerminal | Reboot; if you deployed, it can now boot from eMMC. |
## What you need in place
- LXC: eth1 = 10.20.50.1/24; dnsmasq (DHCP + TFTP on eth1); `/srv/tftpboot` with RPi 4 boot files; NAT for 10.20.50.0/24 via eth0; dashboard running; `golden.img` present for Deploy. See NETWORK-BOOT-LXC.md and setup-network-boot-on-lxc.sh.
- reTerminal: EEPROM boot order = network first; Ethernet on 10.20.50.0/24; netboot environment that runs provisioning-client.sh with `PROVISIONING_SERVER=http://10.20.50.1:5000`.
- Netboot root: must provide network, curl, and the client script (NFS, initramfs, or custom root).
The TFTP setup only gets the Pi to boot a kernel (and an optional root). The provisioning itself (Deploy/Backup) is done by that kernel’s environment running provisioning-client.sh against the dashboard on the LXC.