New Azure Local Cluster - Live Migration Fails 0x8009030E

Ian Maddox 5 Reputation points
2025-03-13T15:34:39.3366667+00:00

I have set up a new Azure Local Cluster

Live Migration from Node 1 to Node 2 (i.e. within the cluster) fails with error 0x8009030E.

Non-Live Migration works ok.

Azure Local

2 answers

  1. Pramidha Yathipathi 1,135 Reputation points Microsoft External Staff Moderator
    2025-05-16T20:23:30.61+00:00

    Hi Ian Maddox,

    Apologies for the delay in response. May I know if the issue has been resolved? If you're still experiencing the same problem, you can follow the steps below.

    Check Live Migration Settings on both source and destination nodes:

    Open Hyper-V Manager

    Go to Hyper-V Settings > Live Migrations

    Ensure that incoming and outgoing live migrations are enabled.

    Authentication protocol is set to Kerberos.

    Performance options use TCP/IP only (or confirm that SMB isn't misconfigured).

    Specify the right IP address range if you're using static IPs for live migration traffic.
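    The settings above can also be applied with the Hyper-V PowerShell module. A minimal sketch (run on both nodes; verify the parameter values match your environment):

    ```powershell
    # Enable incoming and outgoing live migrations on this host.
    Enable-VMMigration

    # Use Kerberos authentication and plain TCP/IP for migration traffic.
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
               -VirtualMachineMigrationPerformanceOption TCPIP

    # Review the resulting configuration.
    Get-VMHost | Select-Object VirtualMachineMigrationEnabled,
        VirtualMachineMigrationAuthenticationType,
        VirtualMachineMigrationPerformanceOption
    ```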

    Verify Kerberos delegation: ensure that both nodes are properly configured in Active Directory:

    In AD Users and Computers, go to each Hyper-V node's computer account.

    Go to Properties → Delegation tab.

    Choose: Trust this computer for delegation to specified services only

    Then: Use Kerberos only

    Add cifs and Microsoft Virtual System Migration Service (service type: Microsoft Virtual System Migration Service, service name: Hyper-V node FQDN).

    Without proper delegation, Kerberos auth might fail silently, causing the migration to hang.
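    The delegation steps above can be scripted with the ActiveDirectory module. A sketch assuming hypothetical node names NODE1/NODE2 in a contoso.com domain (repeat with the roles swapped so each node can delegate to the other):

    ```powershell
    Import-Module ActiveDirectory

    # Services NODE1 is allowed to delegate to on NODE2 (short name and FQDN).
    $services = @(
        "cifs/node2.contoso.com",
        "cifs/NODE2",
        "Microsoft Virtual System Migration Service/node2.contoso.com",
        "Microsoft Virtual System Migration Service/NODE2"
    )

    # Constrained delegation ("use Kerberos only") from NODE1 to those services.
    Get-ADComputer -Identity "NODE1" |
        Set-ADObject -Add @{ "msDS-AllowedToDelegateTo" = $services }
    ```

    After changing delegation, restart the nodes (or at least the Hyper-V Virtual Machine Management service) so fresh Kerberos tickets are issued.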

    Confirm firewall ports: ensure that the required ports for Live Migration are open between nodes (e.g. via Windows Defender Firewall):

    TCP 6600 (default for LM)

    TCP 135

    TCP 445

    Ephemeral ports (TCP 49152–65535, depending on OS)

    ICMP (for connectivity testing)
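    You can verify the ports above are reachable with a quick sketch from Node 1 (the destination name is a placeholder for your node's FQDN):

    ```powershell
    # Check the key live-migration ports on the destination node.
    foreach ($port in 6600, 135, 445) {
        Test-NetConnection -ComputerName "node2.contoso.com" -Port $port |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }
    ```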

    Use a dedicated network for live migration if you don't already; create a dedicated network interface or vSwitch just for Live Migration traffic.

    Set Live Migration network priority in Failover Cluster Manager.
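    Restricting migration traffic to a dedicated subnet can be done in PowerShell as well. A sketch assuming a hypothetical 10.0.5.0/24 migration subnet (run on each node):

    ```powershell
    # Only allow live migration on explicitly listed networks.
    Set-VMHost -UseAnyNetworkForMigration $false

    # Add the dedicated migration subnet with top priority.
    Add-VMMigrationNetwork -Subnet "10.0.5.0/24" -Priority 1

    # Confirm which networks are now allowed.
    Get-VMMigrationNetwork
    ```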

    Check resource bottlenecks: live migration can stall (for example at 48%) if the network is saturated or host memory is under pressure.

    On both the source and destination, monitor: memory usage (Task Manager or Resource Monitor), NIC throughput, and CPU pressure.
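    A quick way to sample those counters on a node while a migration is in flight:

    ```powershell
    # Sample available memory, NIC throughput, and CPU usage every 2 seconds.
    Get-Counter -Counter @(
        '\Memory\Available MBytes',
        '\Network Interface(*)\Bytes Total/sec',
        '\Processor(_Total)\% Processor Time'
    ) -SampleInterval 2 -MaxSamples 5
    ```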

    Use PowerShell for better error detail; try running the migration with verbose output:

    Move-VM -Name "your-vm-name" -DestinationHost "node2-fqdn" -IncludeStorage -Verbose
    

    Ensure Cluster-Aware Updating is configured and make sure the VM’s Integration Services are up to date.

    Verify all cluster nodes are fully patched and running the same Hyper-V version.
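    A sketch for comparing patch level across nodes (node names are placeholders; assumes PowerShell remoting is enabled):

    ```powershell
    # Report OS build and most recent hotfix on each cluster node.
    Invoke-Command -ComputerName "node1", "node2" -ScriptBlock {
        [pscustomobject]@{
            Node    = $env:COMPUTERNAME
            OSBuild = (Get-CimInstance Win32_OperatingSystem).BuildNumber
            LastFix = (Get-HotFix |
                       Sort-Object InstalledOn -Descending |
                       Select-Object -First 1).HotFixID
        }
    }
    ```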

    Please refer to the documents below:

    https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/set-up-hosts-for-live-migration-without-failover-clustering

    https://learn.microsoft.com/en-us/troubleshoot/windows-server/virtualization/troubleshoot-live-migration-issues

    https://techcommunity.microsoft.com/blog/coreinfrastructureandsecurityblog/why-hyper-v-live-migrations-fail-with-0x8009030e/2238446

    If you found this information helpful, please click "Upvote" on the post to let us know.

    If the issue still persists, feel free to ask; we are happy to assist you further.

    Thank You.


  2. Ian Maddox 5 Reputation points
    2025-08-20T10:05:36.1366667+00:00

    I think I have now resolved this. It is a known issue with Live Migration of VMs that have Dynamic Memory on Azure Local, fixed by adding the registry value SkipSmallLocalAllocations - see GitHub post for specifics.

