[How-To] Fix NSX to Re-Import ESX into VCF
If you stumbled upon this blog, chances are you are working with VMware Cloud Foundation 9.x. The new standard platform for virtualization in the enterprise introduces quite a lot of changes if you come from plain vSphere. We have played around with the brownfield import mechanism for quite some time and stumbled upon some caveats.
The issue: NSX fails during the Brownfield Import
We are trying to import/onboard/brownfield an existing vCenter Server into VCF 9.0.x, and this fails at the NSX part. In our use case we selected not to have an existing NSX installation in place, so the brownfield import builds a new NSX instance for us.
If you are importing vSphere 8.x, you will have to deploy three NSX Managers (plus a VIP). If you are importing vSphere 9.x, you can choose a simple deployment with a single NSX Manager (plus its VIP).
In our case we had already imported the vCenter and ESX hosts once. We changed some things and wanted to run the import again to see if it still worked. It didn't.
The ESX Hosts were unable to connect to the new NSX Manager(s).
Solution A: vSphere 8.x Hosts
If you are on vSphere 8.x, the solution is quite simple: we just need to "delete" NSX from the ESX hosts. To do this, connect to the host using SSH.
On the ESX host, enter the NSX CLI by issuing "nsxcli". There, type the command "del nsx" and hit Enter. Answer yes a couple of times, then wait a few minutes while the NSX VIBs are removed. Afterwards, reboot the host.
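For reference, the whole sequence on the host looks roughly like this (a sketch; the exact prompts can differ between NSX versions):

nsxcli      # enter the NSX CLI on the ESX host
del nsx     # remove the NSX configuration and VIBs, confirm the prompts with yes
exit        # leave the NSX CLI once the removal has finished
reboot      # reboot the host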

Solution B: vSphere 9.x Hosts
After playing around some more we decided to also fully test out the brownfield import using a 9.x version of vSphere.
Even though we followed Solution A before attempting the import, we still failed at the NSX part (again). With vSphere 9.x the NSX VIBs come pre-installed on the hosts, so the "del nsx" command did not actually remove them. We had to dig some more to get this working again.
This time we need to do a few things on the ESX host so that it recognizes the NSX Manager again.
SSH to the host. First, move this XML file aside as a backup: "mv /etc/vmware/nsx/appliance-info.xml /tmp/old_info.xml". Then null out two certificate files by issuing "cat /dev/null > /etc/vmware/nsx/host-cert.pem" followed by "cat /dev/null > /etc/vmware/nsx/host-privkey.pem". Afterwards, restart a few services: "/etc/init.d/nsx-proxy restart", "/etc/init.d/nsx-opsagent restart", "/etc/init.d/nsx_cfgagent restart" and "/etc/init.d/nsx-nestdb restart".
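Put together, the whole sequence on each 9.x host looks like this (the same commands as above, collected for easy copy-pasting):

mv /etc/vmware/nsx/appliance-info.xml /tmp/old_info.xml    # back up the old appliance info
cat /dev/null > /etc/vmware/nsx/host-cert.pem              # empty the host certificate
cat /dev/null > /etc/vmware/nsx/host-privkey.pem           # empty the host private key
/etc/init.d/nsx-proxy restart                              # restart the NSX host services
/etc/init.d/nsx-opsagent restart
/etc/init.d/nsx_cfgagent restart
/etc/init.d/nsx-nestdb restart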

Last step for both solutions: fix NSX
If you think that, after fixing this, the "Restart Task" in VCF will automatically make the NSX Manager redeploy the nodes, you are mistaken. You need to log in to the NSX Manager, go to System – Fabric – Nodes, and trigger the reinstall on the hosts manually. Only trigger a "Restart Task" in VCF after you have green checkmarks on all of your ESX hosts.
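If you prefer checking the host status from a terminal instead of the UI, the NSX Manager REST API exposes the transport node state. A rough sketch (the <nsx-manager-fqdn> and <node-id> placeholders are ours, and the endpoints may vary slightly between NSX versions):

curl -k -u admin 'https://<nsx-manager-fqdn>/api/v1/transport-nodes'                  # list transport nodes to find the host's node ID
curl -k -u admin 'https://<nsx-manager-fqdn>/api/v1/transport-nodes/<node-id>/state'  # show the install/realization state of that host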
There are actually two tasks triggered in VCF when brownfielding a WLD. However, you only need to restart the one with multiple subtasks for the brownfield import to succeed.
Following these steps should fix the NSX brownfield import for you, at least for this specific issue.
Next Steps
If you’ve read this far then chances are you are still having issues. Feel free to reach out to us. We’re happy to help out!