#VMware #vSphere 8.0: To HotAdd vCPUs or Not To HotAdd vCPUs?

<tldr;> Knock Yourself Out </tldr;>

One of the toughest configuration decisions that vSphere Administrators have had to make over the last decade is whether or not to enable CPU HotAdd on VMs hosting resource-intensive Business Critical Applications, especially if the Application is NUMA-aware and can benefit from NUMA Topology notifications.

I talked extensively about this in this blog post (https://blogs.vmware.com/apps/2021/06/cpu-hotadd-for-windows-vms-how-badly-do-you-want-it.html).

Give it a read. I wrote it myself, and (ahem) my bonus depends on clicks (PS: It doesn't. Go read it anyway. For the Culture).

So, where was I? Yes, applications such as #Microsoft #SQLServer benefit from being able to "see" the NUMA topology of the underlying resources, and SQL Server and vSphere Administrators can improve resource utilization efficiency by right-sizing large VMs and scaling up resources dynamically without interruptions. Unfortunately, the two features that provide this state of peaceful coexistence (auto-vNUMA, which kicks in beyond 8 vCPUs, and CPU HotAdd) could not coexist on a VM in vSphere.

Until now. In vSphere 8.0, the following advanced configuration setting allows you to enable CPU HotAdd on a VM without disabling its ability to become NUMA-aware once the vCPU threshold is crossed:

numa.allowHotadd

With that key set to TRUE, a VM will support CPU HotAdd and still be able to benefit from NUMA awareness.
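For reference, here is a sketch of how that might look in a VM's advanced configuration (via the vSphere Client under VM Options > Advanced > Configuration Parameters, or directly in the .vmx file while the VM is powered off). Treat this as an illustration, not a prescriptive template; the `vcpu.hotadd` key shown alongside it is the standard CPU HotAdd toggle:

```
# VM advanced configuration (.vmx) - VM must be powered off to edit
vcpu.hotadd = "TRUE"        # enable CPU HotAdd on the VM
numa.allowHotadd = "TRUE"   # vSphere 8.0: keep vNUMA exposed even with HotAdd enabled
```

As always, validate the resulting guest topology after powering on (and after any hot-add operation) before relying on it for a Business Critical Application.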

The "Phantom Node" issue I reported in my blog post has also been rectified on both vendors' sides. Allocated memory is now evenly divided among all NUMA nodes in the Guest OS, regardless of whether the topology was auto-configured or manually created.

I have attached some screenshots of this new behavior in #Windows #Server 2022.