
Migrate a Windows VM from Hyper-V to Proxmox (and Back Again)
According to some industry commentators, Broadcom's changes to VMware licensing may drive adoption of alternative hypervisors such as Proxmox.
By extension, Proxmox might also be considered an alternative to other paid-for hypervisors, such as Hyper-V. However, whilst tooling and knowledge exist for migrating VMware-hosted workloads to Proxmox, the same cannot be said for Hyper-V.
Consequently, this article explores the procedures required to:
- migrate a Windows guest VM from Hyper-V to Proxmox, and,
- migrate a Windows guest VM from Proxmox back to Hyper-V.
Baseline Configuration and Assumptions
Hyper-V Host: Hyper-V running on Windows Server 2022 Datacenter (evaluation) SKU with latest patches applied.
Hyper-V Guest: A fresh install of Windows 11 Pro 23H2 with latest patches applied.
Proxmox Host: Proxmox 8.1 with the local-lvm storage option.
Exactly the same method can be used to migrate Windows Server SKUs.
Migrate a Windows VM from Hyper-V to Proxmox
Note source VM configuration
Certain configurations, such as networking, may be changed or lost as a consequence of the migration, so note any statically assigned IP addressing for recreation on the target platform.
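For example, from a command prompt inside the guest, you can capture the current network configuration to a file for later reference (the output path is just an example):
ipconfig /all > C:\Users\Public\pre-migration-ipconfig.txt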
Install Virtio Drivers and Qemu Agent
Download the virtio drivers ISO (published by the Fedora project in its virtio-win repository) and save it in a location accessible from your Hyper-V host (I have a folder for ISOs). In the settings for the Hyper-V guest, mount the ISO as a virtual DVD.
RDP into your Hyper-V guest. Browse to the virtio drivers DVD and install the following:
virtio-win-guest-tools.exe
This package installs:
- Paravirtualised (virtio) device drivers, required to access virtualised hardware provided by Proxmox.
- Qemu guest agent, to enable richer Proxmox hypervisor integration features such as memory ballooning and graceful shutdown.
- Spice extensions, to provide a richer client experience.
Windows guests require at least the paravirtualised storage drivers, and will blue-screen at boot on Proxmox without them, as their virtual disks will otherwise be undetectable.
Once installed, shut the guest down in preparation for its move.
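If you prefer to script these steps, the following should work from an elevated PowerShell prompt on the Hyper-V host, assuming a VM named 'Win11_toMove' and the ISO saved to D:\ISOs (both names are illustrative):
Set-VMDvdDrive -VMName "Win11_toMove" -Path "D:\ISOs\virtio-win.iso"
Then, once the tools are installed inside the guest:
Stop-VM -Name "Win11_toMove"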
Move the virtual disk to Proxmox
Next, move the vhd and/or vhdx files for your guest VM to your Proxmox server. The easiest way to do this is scp from the command line, or a tool such as WinSCP.
From a command line on your Hyper-V host, type:
scp d:\path\to\vms\Win11_toMove_c.vhdx root@<ProxmoxServer>:/var/lib/vz/images/Win11_toMove_c.vhdx
replacing <ProxmoxServer> with the hostname or IP address of your Proxmox server.
Note that if your VM has multiple vhd(x) files then you may need to repeat the above and subsequent steps for each vhd(x) file.
Convert the disk to a Proxmox-compatible format
Proxmox can import raw, qcow2 and vmdk (VMware) format virtual disks, but not Hyper-V disks. To import, you need to convert your vhd(x) file to a recognised format. Proxmox has tools to perform this conversion.
For conversion to qcow2, in the Proxmox web interface, select the node to host the VM and then 'Shell' (or connect to the node via SSH if you prefer), and type:
qemu-img convert -O qcow2 /var/lib/vz/images/Win11_toMove_c.vhdx /var/lib/vz/images/Win11_toMove_c.qcow2
This command creates a copy of the source vhdx file in qcow2 format suitable for import. However, Proxmox requires a target guest VM be specified during the import process, so you must next create a target guest VM.
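Optionally, sanity-check the converted file before importing it:
qemu-img info /var/lib/vz/images/Win11_toMove_c.qcow2
This should report the file format as qcow2, along with the virtual disk size.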
Create New Proxmox Guest VM
Create a new VM with the following configuration:
- VMID: note the number allocated, as we need it to complete the disk import
- DVD: no media
- Operating System: Windows 11
- No drive required for virtio drivers, as they are already present in the source disk
- Machine type: q35
- Firmware: OVMF (UEFI) rather than legacy BIOS
- Add an EFI disk (on local storage)
- Add a TPM 2.0 with its disk on local storage
- Storage controller: VirtIO SCSI single, but delete the suggested disk (we will instead attach the imported disk in a later step)
- CPU: 2 cores of x86-64-v2-AES
- Memory: 4096 MB
- Network: virtio network adapter
Do not start the machine just yet!
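If you prefer the command line, a broadly equivalent VM can be created from the node shell with qm. The following is a sketch only, assuming a VMID of 123, local-lvm storage and the default vmbr0 bridge (adjust to suit):
qm create 123 --name Win11-Migrated --ostype win11 --machine q35 --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 --tpmstate0 local-lvm:1,version=v2.0 --scsihw virtio-scsi-single --cores 2 --cpu x86-64-v2-AES --memory 4096 --net0 virtio,bridge=vmbr0 --ide2 none,media=cdrom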
Import the source disk
In the Proxmox web interface, select the node to host the VM and then 'Shell' (or connect via SSH if you prefer). Type:
qm importdisk 123 /var/lib/vz/images/Win11_toMove_c.qcow2 local-lvm
where 123 is the VMID of your guest VM that you noted in the previous step.
Attach the imported disk to the VM
In the Proxmox web interface, browse to your new VM's Hardware. You will see an 'Unused Disk'; this is the disk you have just imported. Double-click the disk to add it to your VM.
When prompted for bus type, you MUST select SATA; any other option will prevent the VM from booting, because Windows has not yet associated the newly installed virtio storage driver with its boot device.
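The equivalent from the node shell, assuming VMID 123 and that the import created a volume named vm-123-disk-2 (check the 'Unused Disk' entry for the actual name, which varies with the number of existing disks):
qm set 123 --sata0 local-lvm:vm-123-disk-2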
Correct the boot order
Browse to your VM's Options and double-click Boot Order. Tick the SATA device you just added to make it bootable, and drag it to an appropriate place in the boot order (such as first). I like the order to be DVD, then SATA, then Net.
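This too can be set from the node shell, assuming the DVD drive is ide2 and the network device is net0 (the defaults for a VM created as above):
qm set 123 --boot order='ide2;sata0;net0'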
Boot your VM
Start your VM and observe the boot process on the console.
The first boot on Proxmox may take a little longer than subsequent boots, and you may see a message that reads 'Getting Devices Ready'. This happens as Windows detects and configures the drivers required to support its new (virtual) hardware.
Once booted, log in and:
- recreate network and other custom configuration(s) noted in the first step, and,
- examine Device Manager to validate all virtual devices are recognised and no 'unknown devices' remain, and,
- examine Computer Management | Services to validate the Qemu agent service is present and started, and that it reports IP address information to the Summary section in the Proxmox web interface.
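As a quick host-side check of the agent, you can query it from the node shell (again assuming VMID 123):
qm agent 123 ping
A silent return indicates the agent is responding; qm agent 123 network-get-interfaces will additionally dump the IP information the agent reports.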
Congratulations - the migration is now complete.
Tidy Up
On Proxmox, remove the vhdx and qcow2 files from /var/lib/vz/images/ as these are no longer required.
On Hyper-V, you may wish to remove the source VM if no longer required.
Migrate a Proxmox VM to Hyper-V
Here is how to move a VM back to Hyper-V from Proxmox, effectively reversing the above procedure.
Prepare the VM to be migrated
On Proxmox, shut down the guest VM to be migrated. Note the VMID.
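If you prefer, the shutdown can be issued from the node shell; assuming a VMID of 109, as used in the examples that follow:
qm shutdown 109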
Export the Proxmox VM to a VHDX file
On Proxmox, select the VM to be exported and examine Hardware. Note the details of the 'Hard Disk' to be exported. It will read something like 'vm-109-disk-1', where:
- 109 is the VMID
- disk-1 is the disk to be exported
These details identify the logical volume to be converted in the next step.
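If you need to confirm the volume's path on disk, or you are unsure which volumes belong to the VM, the following commands may help (assuming the disk lives on the default local-lvm storage):
qm config 109
pvesm path local-lvm:vm-109-disk-1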
On Proxmox, in node Shell or via SSH, run:
qemu-img convert -O vhdx /dev/pve/vm-109-disk-1 /var/lib/vz/images/moveToHv.vhdx
replacing 109 with the VMID you noted for your VM (the /dev/pve path assumes the default local-lvm storage). This creates an exported vhdx file of your VM ready for use in Hyper-V.
Note that:
- if your VM has multiple disks then you may need to repeat the above process for each disk, but,
- disks associated with UEFI and TPM do not need to be exported.
Move to Hyper-V
From the command line on your Hyper-V server (or using a tool like WinSCP), copy the exported file from Proxmox to your Hyper-V server:
scp root@<ProxmoxServer>:/var/lib/vz/images/moveToHv.vhdx d:\path\to\my\vm
replacing <ProxmoxServer> with the hostname or IP address of your Proxmox server.
Create a new VM in Hyper-V and attach the copied disk
Create a new Gen2 VM in Hyper-V. Attach the copied disk when prompted. Don't change the disk bus setting (despite it attaching as SCSI not SATA).
The VM will subsequently boot and work on Hyper-V without further configuration.
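If you prefer to script the VM creation, something like the following should work from an elevated PowerShell prompt on the Hyper-V host (the VM name and memory size are illustrative):
New-VM -Name "MovedFromProxmox" -Generation 2 -MemoryStartupBytes 4GB -VHDPath "d:\path\to\my\vm\moveToHv.vhdx"
The -VHDPath parameter attaches the existing, copied disk as part of the VM's creation.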
Tidy Up
On Proxmox, remove the vhdx file from /var/lib/vz/images/ as it is no longer required.
You may wish to remove the source VM if no longer required.
Alternative Tooling Options
StarWind offers a free P2V and V2V converter that can attach to Hyper-V and create qcow2 images, although it cannot migrate directly into Proxmox.
This is also a good option for P2V migrations, as the host upon which the software runs can be captured to an image.
Update, February 2025
Since publishing this article I've received some excellent and very welcome feedback. Proxmox is a fantastic product and it is my pleasure to contribute in some tiny way to our great community of users.