Packer and Docker images on ARM64
ARM64/AARCH64 is all over the place these days; here's how you can get building for the Raspberry Pi or AWS Graviton.
Packer
Packer works the same way on ARM64 as it does on X86_64. If you have a build that works, like this one (Debian 12), then you're in business. The easiest approach is to find an ARM64 box and run Packer there.
Coming up with the build instructions has some challenges, though: ARM64 differs architecturally from X86_64, notably in terms of booting and power management.
Also, debugging a Packer build involves looking in a few places. I wasn't able to get it working the way I wanted over SSH, but the following process is good enough if I don't think about it too much.
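One of those places is Packer's own debug log. A minimal sketch, assuming a template file named debian12-arm64.pkr.hcl (the file name is hypothetical):

# PACKER_LOG=1 turns on verbose logging; PACKER_LOG_PATH sends it to a file
PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build debian12-arm64.pkr.hcl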
Build environment
- Raspberry Pi 5 running Raspberry Pi OS from a USB NVMe drive
- Follow the Debian instructions to install KVM; note that --no-install-recommends is important for headless setups
- Also install the EFI firmware for AARCH64 (apt install qemu-efi-aarch64) and some extra utilities for QEMU (apt install qemu-utils); see the sketch after this list
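Roughly, the installs look like this. The KVM package set follows the Debian wiki instructions, so treat the exact list as a sketch:

# KVM/libvirt without desktop extras; --no-install-recommends matters on headless setups
sudo apt install --no-install-recommends qemu-system libvirt-clients libvirt-daemon-system

# EFI firmware for AARCH64 guests, plus qemu-img and friends
sudo apt install qemu-efi-aarch64 qemu-utils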
Seeing what’s going on
There are four phases to the Debian installer bootup on ARM64:
- GRUB2 installer screen
- Linux kernel framebuffer boot
- Linux kernel framebuffer switch to an invisible display (I gave up trying to fix this)
- Serial console
Since Packer types blindly into the terminal on our behalf, we need to be able to see what's going on in order to debug things and come up with our Packer script and tweaked Debian preseed.
Using the above linked file, I was able to connect over VNC for phases 1 and 2, but after that I also had to tail -f serial.log in the same directory to follow phase 4 onwards and see the messages from the Debian 12 installer.
Test boot image
Once the image has been generated, it's best to test it before uploading it to Nexus or using it.
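The exact command isn't reproduced here, but a minimal sketch, assuming the built image is named debian12-arm64.qcow2 and the EFI firmware comes from the qemu-efi-aarch64 package:

# Boot the freshly built image with the serial console on stdio (exit with Ctrl-A X)
qemu-system-aarch64 \
  -machine virt -cpu host -accel kvm \
  -m 2048 -smp 2 \
  -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
  -drive file=debian12-arm64.qcow2,format=qcow2,if=virtio \
  -nographic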
This works nicely over SSH and drops you into a serial console if the image is good to go.
KVM/libvirt
Libvirt XML for an ARM64 VM is different from an X86_64 VM, since the hardware to emulate is different. There are enough differences to make it worth maintaining separate templates for ARM64 vs X86_64 if you are deploying with Ansible or something similar. My template looks like this:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ vm_def.memory }}</memory>
  <currentMemory unit='MiB'>{{ vm_def.memory }}</currentMemory>
  <vcpu placement='static'>{{ vm_def.cpus }}</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='aarch64' machine='virt-7.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/{{ vm_name }}_VARS.fd</nvram>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <emulator>/usr/bin/qemu-system-aarch64</emulator>
  <features>
    <acpi/>
  </features>
  <cpu mode='host-passthrough'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' discard='unmap'/>
      <source file='{{ kvm_image_dir }}/{{ vm_def.image_file }}' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
    </disk>
    {% for host_disk_device in vm_def.host_disk_devices | default([]) -%}
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='{{ host_disk_device.source_dev }}'/>
      <target dev='{{ host_disk_device.target_dev }}' bus='virtio'/>
    </disk>
    {% endfor %}
    {% for network_interface in vm_def.network_interfaces -%}
    <interface type='bridge'>
      <source bridge='{{ network_interface.source_bridge }}'/>
      <mac address='{{ network_interface.mac_address }}'/>
      <model type='virtio'/>
    </interface>
    {% endfor %}
    <console type='pty'>
      <target port='0'/>
    </console>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <controller type='usb' index='0' model='qemu-xhci'/>
    <input type='mouse' bus='usb'/>
    <input type='keyboard' bus='usb'/>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
    </rng>
  </devices>
  <!--
    KVM VM name via BIOS asset tag
    * https://serverfault.com/a/1054641
    * https://gist.github.com/Informatic/49bd034d43e054bd1d8d4fec38c305ec?permalink_comment_id=3593536
    * name in /sys/class/dmi/id/chassis_asset_tag
  -->
  <!--
  <qemu:commandline>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=3,asset={{ vm_name }}'/>
  </qemu:commandline>
  -->
  <sysinfo type='smbios'>
    <chassis>
      <entry name='asset'>{{ vm_name }}</entry>
    </chassis>
  </sysinfo>
</domain>
Feel free to borrow.
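Once the template is rendered (via Ansible or by hand), defining and booting a guest is the usual libvirt routine; the VM name here is hypothetical:

virsh define myvm.xml      # register the rendered XML with libvirt
virsh start myvm           # boot it
virsh console myvm         # attach to the serial console; exit with Ctrl-]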
Docker Images
You can build ARM64 images on your Intel laptop (Docker calls this multi-platform builds). However, since I had a spare Raspberry Pi just sitting there, it's easy enough to follow the normal install instructions and build images natively on it. Thinking about it, I could also have just used Podman, since it's easier to install.
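If you do want the multi-platform route instead, a minimal sketch with docker buildx (the image tag and registry are hypothetical):

# One-off: create a buildx builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build for ARM64 and X86_64 in one pass and push the multi-arch manifest
docker buildx build --platform linux/arm64,linux/amd64 \
  -t registry.example.com/myapp:latest --push .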
Closing
With these processes in place, it's easy and fast to create VMs on a Raspberry Pi to play with things outside of the main lab, or anything else you care about - perfect for testing your friend's vibe coded project(!)