Recently I needed an Ubuntu system for development, in an environment where Windows is the only operating system allowed on work devices. I proposed a virtual machine running Ubuntu, which was accepted. So I had to set up an Ubuntu VM running on Windows.
For consistency with the production server, we ruled out WSL as an option: without being specific about the differences, the WSL documentation indicates that the environment differs from native Linux and can't run everything.
I had experience with qemu on Linux, so I first set up a qemu VM on Windows. This was easy, but performance was poor: without KVM (which the Windows kernel doesn't support), qemu falls back to un-accelerated software emulation. It works and it's easy to manage, but it is slow enough that the developer team was likely to be frustrated.
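For the record, the invocation was along these lines; a minimal sketch, with illustrative names and sizes (the disk image name, ISO name and port numbers below are my choices, nothing canonical):

    # Create a disk image and boot the Ubuntu installer from an ISO.
    # Without KVM, qemu on Windows falls back to TCG software emulation.
    qemu-img create -f qcow2 ubuntu.qcow2 40G
    qemu-system-x86_64 `
        -m 4096 -smp 2 `
        -drive file=ubuntu.qcow2,format=qcow2 `
        -cdrom ubuntu-server.iso `
        -nic user,hostfwd=tcp::2222-:22

The hostfwd option forwards host port 2222 to the guest's SSH port, which is enough for day-to-day access.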
The guidance I could find on speeding up qemu said the solution was to enable Hyper-V and WSL, run Ubuntu in WSL2, and install and run qemu from that Ubuntu. While this might work, I wasn't inclined to run qemu in WSL2. One of the motivations for building the VM was to eliminate configuration work for the developers, and I wasn't interested in dealing with the issues of developing a WSL2 image export/import, if that is even possible.
But Hyper-V is itself virtualisation. Why not run Ubuntu in a Hyper-V VM directly?
It was easy to install Hyper-V and create the VM, booting from an Ubuntu ISO, but network configuration failed: despite the documentation, there was no DHCP service on the default switch. Using a packet sniffer, I could see the DHCP requests from the guest, but there was no response. The interface itself worked, but had to be brought up manually after each boot.

I did manage to find documentation for adding a NAT network: add an internal switch and configure NAT over it. With that, it was possible to access the Internet from the VM and to reach the VM from the host using SSH, though the VM network still had to be brought up manually after each boot. The default switch, even if it had worked as documented, was only for Internet access from the VM and was documented as not allowing access from host to guest. The recommended solution was an 'external' switch, but that requires two NICs on the host and an external switch to connect them. Hyper-V networking is truly perverse.
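For anyone retracing this, the NAT setup I pieced together was roughly the following PowerShell (run as administrator; the switch name, VM name and address range are my own choices):

    # Create an internal switch; the host gets a vEthernet adapter for it.
    New-VMSwitch -SwitchName "DevNAT" -SwitchType Internal

    # Give the host-side adapter the gateway address for the NAT subnet.
    New-NetIPAddress -IPAddress 192.168.200.1 -PrefixLength 24 `
        -InterfaceAlias "vEthernet (DevNAT)"

    # Create the NAT for that subnet.
    New-NetNat -Name "DevNAT" -InternalIPInterfaceAddressPrefix 192.168.200.0/24

    # Attach the VM's network adapter to the new switch.
    Connect-VMNetworkAdapter -VMName "UbuntuDev" -SwitchName "DevNAT"

Note that nothing provides DHCP on the internal switch either, so the guest still needs a static address in that subnet with 192.168.200.1 as its gateway.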
The next option was VirtualBox. It's open source, but provided by Oracle, and Oracle has a history of adverse management of open source projects under its control. Nonetheless, given the poor performance of qemu and the nearly useless and poorly documented networking of Hyper-V, I gave it a go.
And I was pleasantly surprised. VirtualBox is easy to install and it was easy to create the VM. Networking just worked, and it was easy to map host ports to the guest to allow SSH, HTTP and file share access from the host. And, when all was done, performance was significantly better than qemu running on Windows.
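The port mappings can be set in the GUI, but the equivalent VBoxManage commands look roughly like this (the VM name and host port numbers are my choices; host port 4445 avoids Windows' own use of 445):

    # Forward host ports to services in the guest (run with the VM powered
    # off; 'VBoxManage controlvm ... natpf1' does the same for a running VM).
    VBoxManage modifyvm "UbuntuDev" --natpf1 "ssh,tcp,,2222,,22"
    VBoxManage modifyvm "UbuntuDev" --natpf1 "http,tcp,,8080,,80"
    VBoxManage modifyvm "UbuntuDev" --natpf1 "smb,tcp,,4445,,445"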
So, the final solution is Ubuntu running in a VirtualBox VM. It's easy to export/import, and easy to set up on a developer system: just install VirtualBox and import the VM image. It's easy to start/stop, easy to change resources (more memory and CPU for those with more grunty workstations), and easy to change the port mapping if access to additional guest services is required.
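Concretely, distribution amounts to a couple of commands (names again illustrative):

    # On the build machine: export the configured VM as a portable appliance.
    VBoxManage export "UbuntuDev" -o ubuntu-dev.ova

    # On a developer machine: import it, then adjust resources to taste.
    VBoxManage import ubuntu-dev.ova
    VBoxManage modifyvm "UbuntuDev" --memory 8192 --cpus 4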