Overview
VMware Cloud Foundation (VCF) is a suite of products that you are likely already familiar with if you have used VMware products in the past. VCF is not a product in and of itself, and it does not require a public cloud; it allows you to build and maintain your own private cloud.
The current General Availability (GA) release includes at the core the following products:
- VMware Aria Suite
- VMware vCenter Server Appliance
- VMware ESXi
- VMware vSAN
- VMware NSX
- VMware HCX
Additional software licensing can be obtained for products such as VMware Avi Load Balancer and VMware vDefend Firewall.
This post starts by upgrading the infrastructure to vSphere 8. Then I will convert the cluster to a VMware Cloud Foundation (VCF) management workload domain (WLD).
Lab Configuration
For this post, I am using VMware vCenter Server Appliance 7.0 Update 3p (22837322) and VMware ESXi 7.0 Update 3n (21930508). The specific 7.0 versions should not matter, however.
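If you want to confirm the exact builds from a shell rather than the UI, here is a quick sketch; vpxd -v on the appliance shell is the commonly documented way to print the vCenter build.
# On an ESXi host (SSH or console shell): prints the ESXi version and build number
vmware -vl
# On the vCenter Server appliance shell: prints the vCenter version and build number
vpxd -v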


Networking
The three nodes are connected via a vSphere Distributed Switch (VDS). I have separated host management from virtual machine management.

The ESXi hosts are connected to the Host Management distributed port group with a Static binding configuration. vCenter is connected to the VM Management distributed port group with an Ephemeral – no binding configuration.
An ephemeral port group is required for the vCenter upgrade (ref: https://knowledge.broadcom.com/external/article/324492/static-nonephemeral-or-ephemeral-port-bi.html)

The distributed switch is configured with jumbo frames since I am primarily using iSCSI storage.
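To confirm that jumbo frames actually pass end to end, a quick test from an ESXi host is sketched below; the VMkernel interface name and target address are placeholders for your iSCSI vmk and storage array.
# Send an 8972-byte payload (9000 minus IP/ICMP headers) with the don't-fragment bit set
# through the iSCSI VMkernel interface (vmk1 and the target IP are placeholders)
vmkping -I vmk1 -d -s 8972 192.168.50.10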

Storage
For storage, I am using the iSCSI Software Adapter included with VMware vSphere.

The storage device backing the datastore is iSCSI based and configured with Multipath I/O (MPIO). The Multipathing Policy is set to Round Robin.
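If you want to confirm or set the path selection policy from the ESXi command line instead of the UI, a rough sketch follows; the device identifier is a placeholder for your iSCSI LUN.
# List devices claimed by the Native Multipathing Plugin along with their current Path Selection Policy
esxcli storage nmp device list
# Set Round Robin (VMW_PSP_RR) on a specific device (placeholder identifier)
esxcli storage nmp device set --device naa.6000000000000000000000000000000a --psp VMW_PSP_RR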

Upgrade vCenter
The first step in this process is to upgrade vSphere. I am going to focus on upgrading to the Bill of Materials (BOM) versions.
- VMware vCenter Server Appliance – 8.0 Update 3c (24305161)
- VMware ESXi – 8.0 Update 3b (24280767)
Newer versions of the VCF Import Tool can also convert newer vSphere versions. We start with the vCenter Server, then move on to upgrading the ESXi hosts.
Note: Before deploying a new vCenter Server appliance, change vSphere DRS from Fully Automated to either Manual or Partially Automated.

Stage 1: Deploy New vCenter Server
Mount and start the installer.



On the next screen, you will be asked to provide the address for the current vCenter Server to be upgraded.

The next screen will ask for SSO information for the current vCenter as well as the appliance root password. It will also ask for information about the ESXi host that is running the current appliance.

Verify and accept the SHA1 thumbprints of the certificates.

Provide identity information for the ESXi host to deploy the new appliance to.

Accept the Certificate Warning.

Provide a VM name and root password for the new vCenter Server appliance.

Choose a Deployment size.

Select a storage location.

Configure the network settings for the new (replacement) vCenter Server appliance. This will require a temporary IP address. If you are unable to find the network port group you are expecting, ensure the Distributed Port Group that vCenter uses is configured as an Ephemeral – no binding port group. (ref: https://knowledge.broadcom.com/external/article/324492/static-nonephemeral-or-ephemeral-port-bi.html)

Review the configured settings and go back, if necessary, to correct.

*plays hold music 🎶 *

* grabs a beverage 🧋*

At this point, Stage 1: Deploy vCenter Server is complete.
Stage 2: Upgrade Source vCenter Server
After clicking on the Continue button, you are brought to the Introduction page for Stage 2.

There will be a transition window that checks Pre-upgrade elements. It uses the information from Stage 1, where you populated the Source vCenter and the ESXi host information. You will have to wait for it to complete before proceeding.

A Pre-Upgrade check result window will appear with any Warnings and corresponding Resolutions.

Since these are all benign Warnings in my case, I will proceed by clicking the Close button. Obviously, if you receive actionable Warnings, you will likely have to resolve them before proceeding. Closing the window bypasses the Connect to source vCenter Server task and moves on to the next element.
The next page asks what to copy from the source vCenter Server. Make smart choices here! Consider how much data there is, the age of the appliance, and whether some of this information is already being shipped off to other destinations. For this post, I am going to copy just the Configuration and Inventory.


Make sure there is a good backup of vCenter, not just a snapshot. See below if you need to configure a backup for vCenter.


The data transfer can take quite a while. This is a good time to take a break.

* takes a nap 💤 *

* went to workout 🏋🏻 *

* watched my children grow up 🧑🏼👨🏻🎓 *

Finally, after quite a long time, Stage 2 is complete. Note that the URL is the same one the source vCenter Server used.
After I log into vCenter Server, I can verify that the vCenter version has been upgraded.

When I look at the inventory, I see that I also need to delete the old vCenter and rename the temporary appliance back to the original vCenter name.


Once this is complete, it should be safe to return the Distributed Resource Scheduler (DRS) back to a Fully Automated configuration.

Next up, we will upgrade the ESXi hosts.
Upgrade ESXi
There are two cases for upgrading ESXi hosts: one where vCenter is connected to the internet and an image can be built by downloading the components directly, and one where vCenter is disconnected from the internet and it takes a little more work to build the image. In either case, vSphere Lifecycle Manager is used and images are configured against the cluster.
To start the process, select the cluster in the inventory. At the top, select Updates.

Unless images were already in use, you should be on Baselines. Select Image. Notice the screen tells you that the ESXi hosts are running some version of ESXi 7.0. The next click, selecting Setup Image Manually, is where the process deviates for a disconnected versus a connected vCenter. Please choose the appropriate process below.

Upgrading ESXi Hosts – Disconnected vCenter
If you try to set up the image manually on a disconnected vCenter, you will find that there are no ESXi versions available.

You can download an ESXi ISO directly from Broadcom’s support portal, or, if there is another vCenter in the organization that is connected, an image can be built against a cluster there, exported, and transported by sneaker net to the disconnected systems. Unfortunately, there are an uncanny number of steps here and I will not be able to discuss all of them.
The short story is to get one of the hosts in the cluster upgraded interactively to the destination version. Put the host into maintenance mode, either letting DRS migrate the workloads automatically or moving them off manually. Once it’s in maintenance mode, reboot the host into the installer using the out-of-band management console: the iDRAC on a Dell host, UCSM on a Cisco host, or the iLO on an HPE host.
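If you prefer doing the maintenance-mode step from the host’s own shell rather than the vSphere Client, a minimal sketch follows; running VMs still need to be evacuated (by DRS or by hand) for the command to complete.
# Place the host into maintenance mode (completes once workloads are evacuated or powered off)
esxcli system maintenanceMode set --enable true
# Confirm the state before rebooting into the installer
esxcli system maintenanceMode get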
Starting an Interactive upgrade.
Press Enter to continue.

Press F11 to continue.

Choose the disk to upgrade.

If there is more than one disk available, be sure to press F1 to view the Details of the disks. Below is a screenshot showing that ESXi was found. This indicates that it is likely the OS disk and should be used for the upgrade.

When performing the upgrade, be sure to choose the option to Upgrade or Preserve the VMFS datastore. One of these options will be present depending on the versions being upgraded.

You may (and most likely will) encounter a warning about the CPU not being supported in future versions. The other possibility is that you encounter an unsupported CPU, like the ones here in my lab. Be sure to check the hosts against the Broadcom Hardware Compatibility Guide to ensure the destination version is supported.

Asking if I want to force an unsupported installation.

Finally, the last step before the actual upgrade takes place.

When the upgrade is complete, reboot the host and wait for it to reappear in vCenter as a connected host.

Now that the ESXi host has been updated and shows as connected in Maintenance Mode in the cluster, select the cluster and return to vSphere Lifecycle Manager. It will probably appear that nothing has changed. Select Check again.

Once the cluster checks the hosts again, it should find the upgraded ESXi version.

Select the updated version checkbox and select Proceed with this Image.
Skip the next section to view how to apply the image to the cluster.
Upgrading ESXi Hosts – Connected vCenter
If vCenter is connected to the internet and vSphere Lifecycle Manager has been able to perform a sync operation, the list of available ESXi versions should be populated.

Additionally, the Vendor Addon options should also be populated.

Choose the appropriate ESXi Version and Vendor Addon. Choose Save.

While the cluster is checking for image compliance, select Finish Image Setup.

There will be a pop-up informing you that finishing image setup will replace baselines. Select Yes, Finish Image Setup.

Go to the next section to view how to apply the image to the cluster.
Applying the Image to the Cluster
If you made a mistake or need to change the version, you can edit the image before proceeding. I am going to assume you are ready to apply the image to the cluster, however.
Navigate back to the vSphere Lifecycle Manager for the cluster if you have navigated away. Notice this time, you should be on the Image menu.

Now that an image has been built and checked for compliance against the cluster, you only have to select Remediate All. vSphere Lifecycle Manager will handle the rest: workloads will be migrated off (assuming DRS is configured), and each host will be placed into Maintenance Mode, updated, rebooted, and then taken out of Maintenance Mode automatically. You can even check for any compliance issues prior to the upgrade.
My hosts are getting old and unsupported, but still work for testing and lab purposes, like writing this post. You should not see the warning regarding unsupported CPU for your upgrade.

Select Remediate All. Review the Impact Summary of the remediation. If all is good, select Start Remediation.

The rest of the cluster will then be upgraded. At the end, there will be an Image Compliance message.

Once all the hosts in the cluster have been upgraded, you can move on to preparing the environment for a VCF Conversion.
VCF Conversion
I am not going to go off script here. The documentation for this has step-by-step directions. I will add notes and screenshots where needed.
Copy the VCF Import Tool to the Target vCenter Appliance
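The documentation covers the exact steps; as a rough sketch, assuming SSH and the Bash shell are enabled on the vCenter appliance and using placeholder file and host names, the copy might look like this:
# On the vCenter appliance: create the working directory (removed again later in this post)
mkdir -p /tmp/vcfimport
# From the workstation holding the download: copy the VCF Import Tool bundle over
scp vcf-brownfield-import-5.2.1.2-24494579.tar.gz root@my-vcenter-address:/tmp/vcfimport/
# Back on the vCenter appliance: extract the tool
cd /tmp/vcfimport && tar -xzf vcf-brownfield-import-5.2.1.2-24494579.tar.gz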

Run a Precheck on the Target vCenter Before Conversion
python3 vcf_brownfield.py precheck --vcenter '<my-vcenter-address>' --sso-user '<my-sso-username>' --sso-password '<my-sso-password>'

We can see here that there was a failure. (Note: this was an intentional failure to show a possible fail status. In this case, even though the hosts are connected to a vSphere Distributed Switch, I had them connected to an Ephemeral – no binding port group. Only VM Management virtual machines, such as vCenter and SDDC Manager, should be connected to an ephemeral port group. I moved all the host networking to a Static binding and re-ran the precheck.)
Work through any failures and re-run the precheck until all the checks pass.

Be sure to remove the VCF Import Tool from the vCenter Server after all the checks succeed.
rm --recursive /tmp/vcfimport/
Deploy the SDDC Manager Appliance on the Target vCenter
Locating this on the Broadcom Support portal is a little tricky. I will post a screenshot for locating it.
My Downloads > VMware Cloud Foundation > VMware Cloud Foundation 5.2 > 5.2.1 > Drivers & Tools

Make sure to add a forward and reverse DNS entry for SDDC Manager prior to deploying the appliance.
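A quick sanity check that both records resolve before deploying the OVA, using placeholder names and addresses:
# Forward lookup: the SDDC Manager FQDN should return the planned IP address
nslookup sddc-manager.sfo.rainpole.io
# Reverse lookup: the IP address should resolve back to the same FQDN
nslookup 172.16.11.75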
Deploy the downloaded OVA to the vCenter Server.



I suggest not selecting the Automatically power on deployed VM checkbox so that a snapshot can be made prior to startup.



Choose the appropriate datastore.

Choose a VM Management network. This network should have an Ephemeral – no binding configuration.

Fill in the appropriate information as requested.




Review all the settings and go back if needed. Sometimes, gathering the information to fill in the forms takes long enough that the operation times out. Simply restart the deployment and fill in the information again.

When the deployment is successful, be sure to take a snapshot!

Power on the virtual machine.
Generate an NSX Deployment Specification
Deploying NSX will require four IP addresses: one for the VIP and three for the nodes. The configured network should be on the VM Management port group, which is configured as Ephemeral – no binding. The deployment specification is what configures the NSX cluster.
NSX deployment requires a minimum of 3 hosts.
{
  "license_key": "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE",
  "form_factor": "medium",
  "admin_password": "****************",
  "install_bundle_path": "/nfs/vmware/vcf/nfs-mount/bundle/bundle-133764.zip",
  "cluster_ip": "172.16.11.71",
  "cluster_fqdn": "sfo-m01-nsx01.sfo.rainpole.io",
  "manager_specs": [{
    "fqdn": "sfo-m01-nsx01a.sfo.rainpole.io",
    "name": "sfo-m01-nsx01a",
    "ip_address": "172.16.11.72",
    "gateway": "172.16.11.1",
    "subnet_mask": "255.255.255.0"
  },
  {
    "fqdn": "sfo-m01-nsx01b.sfo.rainpole.io",
    "name": "sfo-m01-nsx01b",
    "ip_address": "172.16.11.73",
    "gateway": "172.16.11.1",
    "subnet_mask": "255.255.255.0"
  },
  {
    "fqdn": "sfo-m01-nsx01c.sfo.rainpole.io",
    "name": "sfo-m01-nsx01c",
    "ip_address": "172.16.11.74",
    "gateway": "172.16.11.1",
    "subnet_mask": "255.255.255.0"
  }]
}
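Before uploading the specification, it is worth checking that the JSON parses cleanly; a minimal sketch using Python's built-in json.tool (the file name is a placeholder) is below.
# Pretty-prints the file if it is valid JSON, otherwise reports the line and column of the syntax error
python3 -m json.tool nsx-deployment-spec.json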
Upload the Required Software to the SDDC Manager Appliance
Download the NSX deployment bundle (install bundle) from Broadcom Support Portal.

Upload the following to SDDC Manager (a rough scp sketch follows this list):
- VCF Import Tool 5.2.1.2 (Do not use version 5.2.1.1) to /home/vcf/vcf-import-package
- The NSX deployment bundle to /nfs/vmware/vcf/nfs-mount/bundle/
- The completed NSX Deployment Specification (the JSON file) to /home/vcf/
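Assuming SSH access to the SDDC Manager appliance as the vcf user, and using placeholder file and host names, the uploads could look like this (the bundle directory may need root or adjusted permissions):
# VCF Import Tool bundle (placeholder file name)
scp vcf-brownfield-import-5.2.1.2-24494579.tar.gz vcf@sddc-manager.sfo.rainpole.io:/home/vcf/vcf-import-package/
# NSX install bundle referenced by the deployment specification
scp bundle-133764.zip vcf@sddc-manager.sfo.rainpole.io:/nfs/vmware/vcf/nfs-mount/bundle/
# Completed NSX Deployment Specification (placeholder file name)
scp nsx-deployment-spec.json vcf@sddc-manager.sfo.rainpole.io:/home/vcf/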

Navigate to the /home/vcf/vcf-import-package directory, switch to the root account using su, and run the install.sh script.
Switch back to the vcf account and test the installation.
python3 /home/vcf/vcf-import-package/vcf-brownfield-import-5.2.1.2-24494579/vcf-brownfield-toolset/vcf_brownfield.py --help

Now is a good time to make a snapshot of the SDDC Manager appliance.
Run a Detailed Check on the Target vCenter
As the vcf user, run the following command from the SDDC Manager, targeting the vCenter to convert.
python3 vcf_brownfield.py check --vcenter '<my-vcenter-address>' --sso-user '<my-sso-username>'
After running the script, it will prompt for passwords. Supply the appropriate passwords when requested. There will also be an SHA256 thumbprint verification.

There will likely be some errors to work through. Open up the log file and search for VALIDATION_FAILED. The log file locations are provided in the output of the check.
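For example, a quick search from the SDDC Manager shell, using the document's placeholder style for the log path (use the location printed by the check):
# Show each failed validation with line numbers and a couple of lines of context
grep -n -A 2 'VALIDATION_FAILED' '<check-log-file>'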

See the Additional Items section below for a few errors and resolutions I ran into.
Keep resolving and re-running the check until you receive the following line:
Total checks: 84, Successful checks: 84, Failed checks: 0, Internal errors: 0

Convert or Import the vSphere Environment into the SDDC Manager Inventory
To perform a convert:
python3 vcf_brownfield.py convert --vcenter '<vcenter-fqdn>' --sso-user '<sso-user>' --domain-name '<wld-domain-name>' --nsx-deployment-spec-path '<nsx-deployment-json-spec-path>'
My lab is slow; this took almost two hours, and that was after modifying the deployment to use only one NSX node. See the Additional Items section if you are testing in a lab with limited resources.

<To be continued…>
Additional Items
vCenter Server Backup
On the vCenter to be upgraded (the source vCenter), log into the VMware Appliance Management Interface (VAMI) at https://fqdn:5480 with the root account.
Choose Backup.

If there are no backup settings configured, choose Configure.

Configure the appropriate information for the organization.

To perform a backup, choose Backup Now.

The Backup Now window will pop-up. Select the checkbox to use the settings configured earlier. You will still need to provide passwords.

Verify the backup activity.

ESXi upgrade policy validation across vCenter and SDDC Manager
We need to adjust the vSphere Lifecycle Manager settings accordingly. Navigate to the vSphere Lifecycle Manager.

Select Settings, then Images under Cluster lifecycle.

Adjust the VM migration policy according to this article: https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vcf-5-2-and-earlier/5-2/esx-upgrade-policy-guardrail-failure.html.

If you are performing a VCF Import check on the SDDC Manager, re-run the check.
Cluster common datastore check
Description: Validates whether the common datastore for all hosts is VSAN skyline health check compatible or with NFS3 version or FC storage.
No common datastore (vSAN, NFS, or FC) matches import criteria.
This message appears in this case because the hosts are using only iSCSI storage. This is not supported by the VCF Import Tool at this time.
If able, attach an NFS datastore to the hosts in the cluster and rerun the check.
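If you have an NFS export available, the datastore can be mounted on each host from the UI or from the command line; a rough sketch with placeholder server, export, and datastore names:
# Mount an NFSv3 export as a datastore on an ESXi host (placeholder values)
esxcli storage nfs add --host nfs01.sfo.rainpole.io --share /export/vcf-import --volume-name nfs-import-ds
# Confirm it is mounted
esxcli storage nfs list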

VCF Import Tool Cleanup Tasks
This list is probably not complete. These are the items I have found that are left over or need to be done to revert a conversion.
- Remove anti-affinity rule for NSX Managers
- Remove NSX from hosts from NSX Manager
- Remove the NSX Manager virtual machines
- Remove the NSX Manager extension from vCenter using https://fqdn/mob > content > extensionManager
- Move the vCenter Server out of the Resource Pool
- (Optional) If an SDDC Manager snapshot was taken and an action will be performed again, move the SDDC Manager out of the Resource Pool
- Remove the Resource Pool
- Remove the SDDC Manager extension from vCenter using https://fqdn/mob > content > extensionManager
Single Node NSX Manager Modification for Testing in a Lab
My lab is fairly robust, but my storage is not the quickest. I still want to test out the VCF Import Tool, and I think the slow storage is causing the NSX deployment workflow to fail.
- Log into the SDDC Manager appliance with the vcf account.
- Switch to the root account with su -.
- Add the following lines to /etc/vmware/vcf/domainmanager/application-prod.properties:
nsxt.manager.cluster.size=1
nsxt.manager.wait.minutes=120
- Restart the domain manager service:
systemctl restart domainmanager.service
The NSX Deployment Specification must still contain information for the VIP and all three manager nodes.