WALT is best known for working with Raspberry Pi boards, and early versions of the software offered only this kind of node. Virtual nodes were then introduced in WALT 2.0, in April 2019. At the time, our main purpose was to ease the setup of automated tests of the WALT software, but virtual nodes could also be used for experiments. Six years later, various features have been added to virtual nodes, making them even more handy and useful, as this post will recap.

What’s a virtual node in WALT?

A virtual node is a qemu-based virtual machine that runs on the WALT server. Virtual nodes behave essentially like physical nodes. For instance, in their default boot mode, they boot their WALT OS image over the virtual network.

Using virtual nodes is very simple:

  1. One first uses walt node create <name-of-the-node> to create a virtual node with sane default settings. The node immediately starts booting its default OS image.
  2. One may then change the node settings, such as the number of CPU cores, by using walt node config.
  3. When the node is no longer needed, one can remove it with walt node remove <name-of-the-node>.
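
For instance, the whole lifecycle of a virtual node (here named demo-node, an arbitrary name) boils down to:

$ walt node create demo-node
$ walt node config demo-node cpu.cores=2
$ walt node remove demo-node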

Some WALT experiments may use only virtual nodes, such as experiments on virtual Ethernet networks. A basic example of this will be introduced below.

Other WALT experiments may mix physical and virtual nodes. For instance, the application server of a LoRaWAN network could be set up on a virtual node, whereas the LoRa gateways and LoRa end-devices could be implemented as add-on boards of Raspberry Pi nodes.

Note that the ability to dynamically add and remove nodes is not the only feature specific to virtual nodes. The next sections list the other features they provide.

Settings specific to virtual nodes

The command walt node config works for both physical and virtual nodes.1 However, there are a few settings which are only available with virtual nodes:

  • cpu.cores
  • ram
  • boot.delay
  • disks
  • networks

We will present each of these settings below. To display the current value of all settings, just type walt node config <vnode>.

For more information, check out the documentation about device settings.

cpu.cores and ram settings

As their names imply, the cpu.cores and ram settings change the number of CPU cores (default: 4) and the amount of RAM (default: 512M). For instance, for a more powerful virtual node, one could type:

$ walt node config <vnode> cpu.cores=8 ram=2G

boot.delay setting

The boot.delay setting applies a delay each time the virtual node reboots. The default is boot.delay=random, which applies a random delay between 1 and 10 seconds. This helps lower the server CPU load when many virtual nodes exist and are all rebooted at once, as happens when the WALT server itself is rebooted, for instance. If you need a faster reboot on your node, you can instead set an integer number of seconds, possibly zero to remove the delay completely.
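
For instance, to remove the delay completely on a node you reboot frequently, following the same syntax as the other settings:

$ walt node config <vnode> boot.delay=0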

disks setting

The disks setting attaches a set of virtual disks to the node. The default is disks=none, because most WALT experiments do not need disks on the nodes, and the default boot mode (i.e., network boot) does not rely on a disk either. But if you need to add disks to a virtual node, for instance one of 256MB and one of 1TB, simply type:

$ walt node config <vnode> disks=256M,1T

Each virtual disk is mapped to a file on the WALT server. However, a virtual disk size of 1TB does not mean that 1TB of server disk space is used: only the virtual disk space actually used (i.e., written to) by the virtual node consumes server disk space.

How you use these two disks depends entirely on the experiment you prepare, so the disks created by this command are not formatted. You will need to log into the node and use the regular utilities (e.g., fdisk, mkfs) to configure them.
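
As a sketch, assuming the 256MB disk appears inside the node as /dev/vda (device names depend on the OS image), it could be partitioned and formatted as follows, using sfdisk, a scriptable variant of fdisk:

root@vnode:~# echo 'type=83' | sfdisk /dev/vda
root@vnode:~# mkfs.ext4 /dev/vda1
root@vnode:~# mount /dev/vda1 /mnt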

Another option is to specify one of the “disk templates” we support. For instance, specifying disks=32G[template=ext4] will create a 32G disk made of a single partition with an ext4 filesystem.2 It is also worth noting that two of the templates we support allow configuring a virtual node for hybrid boot mode, as we showed in a previous blog post.

For the list of possible disk templates and other details, check out the related documentation page.

networks setting

Finally, the networks setting lets you experiment with virtual Ethernet networks. The value must be a list of “network names”; the default is networks=walt-net. The network name walt-net is a reserved value corresponding to the network of the WALT platform itself (i.e., the network connecting the nodes and the server), so having walt-net in the list is mandatory. But you can add other networks, name them as you wish, and optionally define latency and bandwidth limits, as the following example shows.

$ walt node create vnode1
$ walt node create vnode2
$ walt node config vnode1 networks=walt-net,home-net[lat=10ms]
$ walt node config vnode2 networks=walt-net,home-net[lat=10ms]
$ walt node reboot vnode1,vnode2

After rebooting, vnode1 and vnode2 each have one more emulated network interface, allowing them to reach each other through the new network home-net. Unlike walt-net, this new network is not managed by the WALT platform: we have to configure the nodes ourselves. For this example, we will configure static IP addresses.

$ walt node shell vnode1
[...]
root@vnode1:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> [...]
    [...]
2: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> [...]
    link/ether [...]
    inet 192.168.172.182/22 brd 192.168.175.255 [...]
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe94:cf58/64 scope link 
       valid_lft forever preferred_lft forever
3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> [...]
    link/ether [...]
    inet6 fe80::5054:ff:fe61:5b16/64 scope link 
       valid_lft forever preferred_lft forever
root@vnode1:~#

Interface ens3 is unconfigured (no IPv4 address), so it must be the interface connected to network home-net. Let’s configure a static IP address.

root@vnode1:~# ip addr add 192.168.55.2/24 dev ens3
root@vnode1:~# exit

Now we can configure interface ens3 of vnode2 too, with an IP address in the same subnet, and check our config by running a ping.

$ walt node shell vnode2
[...]
root@vnode2:~# ip addr add 192.168.55.3/24 dev ens3
root@vnode2:~# ping 192.168.55.2
PING 192.168.55.2 (192.168.55.2) 56(84) bytes of data.
64 bytes from 192.168.55.2: icmp_seq=1 ttl=64 time=19.7 ms
64 bytes from 192.168.55.2: icmp_seq=2 ttl=64 time=19.7 ms
64 bytes from 192.168.55.2: icmp_seq=3 ttl=64 time=19.7 ms
^C
--- 192.168.55.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 19.662/19.671/19.686/0.010 ms
root@vnode2:~#

The ping output confirms that, from vnode2, we can properly reach the IP address of vnode1 through the new home-net network link. The reported round-trip time is 19.7ms. Since we defined an added latency of 10ms on both nodes, we obtain a value close to 10ms+10ms=20ms, as expected.3

Note that the latency setting only applies to outgoing packets, so for instance a setup with lat=20ms on vnode1 and zero added latency on vnode2 would not emulate a realistic network link. You should prefer symmetric values as we did in this example.

Obviously, for a reproducible experiment, this IP setup will have to be automated. For instance, the WALT OS image may be edited to configure the static IP addresses, or one of the nodes may be configured to start a DHCP service on its new interface.
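
For instance, assuming the OS image runs systemd-networkd (this depends on the image), a file such as /etc/systemd/network/50-home-net.network added to the image would configure ens3 automatically at each boot:

[Match]
Name=ens3

[Network]
Address=192.168.55.2/24

The file name and path are given as an illustration; the interface name and address match the example above.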

By defining more nodes and more networks, one can then experiment with all sorts of network topologies.
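
As a sketch, assuming the networks setting accepts more than one extra network per node, a small routed topology could be declared as follows (net-a and net-b are arbitrary names):

$ walt node create client
$ walt node create router
$ walt node create server
$ walt node config client networks=walt-net,net-a[lat=5ms]
$ walt node config router networks=walt-net,net-a[lat=5ms],net-b[lat=5ms]
$ walt node config server networks=walt-net,net-b[lat=5ms]
$ walt node reboot client,router,server

The router node would then need IP addresses on both extra interfaces and IP forwarding enabled, configured in the same way as the static addresses above.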

For more information, check out the related documentation page.

Serial consoles

The command walt node console <vnode> lets you interact with the serial console of a virtual node. If you reboot the node with walt node reboot <vnode> in another terminal window, the console will display the whole bootup procedure, including the countdown related to the boot.delay setting, the early network booting steps, kernel and initramfs loading, the walt-init scripts, and the startup of OS services.

This view can be very handy when debugging low-level experiments that affect the OS bootup.

Note: use <ctrl-a> <d> to leave the console.

For more details, check out the documentation.

Simplified ownership process

When the server detects a new physical WALT node, there is no way to know who connected it to the WALT network, so the node is first added to the set of “free” nodes. WALT users can then use walt node acquire <node> and walt node release <node> to take or release ownership.

In contrast, when a WALT user types walt node create <vnode> to create a virtual node, the WALT server knows who typed this command, so ownership of the virtual node is set automatically.4

Other features and improvements

Since virtual nodes were first introduced in 2019, many other less visible improvements have been made, including:

  • Optimizing the bootup time and reducing server CPU load by emulating the network bootloader steps, which were very CPU-intensive when run inside the virtual machine.
  • Leveraging the KSM (“Kernel Samepage Merging”) feature of the Linux kernel to start more virtual nodes before reaching the server’s RAM limits.

Thanks to Jérémie Finiel for proofreading. We hope you found this blog post interesting. If you have questions, we will be happy to answer them on the mailing list.


  1. We could also mention the command walt device config, a variant of walt node config that allows configuring other devices; as a reminder, we used it to configure network switches in a previous blog post. ↩︎

  2. Even if the partitioning is already done thanks to the disk template, you still need to update the OS image to automate the mount operation, for instance by editing /etc/fstab. ↩︎

  3. Part of the observed latency is due to OS packet processing in the virtual network, which depends on the power of the WALT server machine. WALT compensates for this by configuring a latency value slightly lower than the value the user specifies. Here, the machine is a little more powerful than average, so WALT overestimated the OS latency, resulting in an observed latency slightly lower than the requested value (19.7ms vs 20ms). In such a case, if you need a more exact value, you will have to tune the requested value a little; here, setting it to 10.15ms on each node should work. And since floating-point values are not accepted, you have to switch to the microsecond unit: lat=10150us. ↩︎

  4. There is probably no reason to transfer ownership of a virtual node to someone else, but if you ever need it, using walt node release and then walt node acquire on a virtual node is not forbidden. ↩︎