As an instructor, I like to have all the material referenced in a class manual available as I prepare for the class and/or when I am preparing to implement the “stuff” we cover in class. So I have gone through the manual and created a page with all the online resources, papers or articles mentioned in the VMware NSX: Install, Configure and Manage [6.4] course in one place.
Parking this here for later reference in class.
Today at VMworld Europe, VMware announced vSphere 6.5. This highly anticipated release promises to deliver several key features and components that have been in the works for some time, including native backup and restore of the vCenter 6.5 appliance, the HTML5 vSphere Client, native HA for the vCenter 6.5 appliance, and Update Manager integrated with the vCenter 6.5 appliance. This release also brings vMotion encryption, VM disk-level encryption, vSphere Integrated Containers, major enhancements to DRS, dramatic improvements to API and automation capabilities with enhanced REST APIs and CLIs, and improvements to the logs and integrated GUI for Auto Deploy, to name just a few.
For more information on the vSphere 6.5 announcements visit VMware’s vSphere Blog posts:
The vSphere 6.x product page at VMware.com also highlights the product versions and licensing:
With VMworld Europe 2016 underway, expect more announcements from VMware.
End of general support for vCNS is less than 90 days away…
This week I am back in the classroom teaching a vSphere 5.5: Install, Configure and Manage class for VMware in Sacramento, CA. During the first few sections of the class, the ESXi user interfaces and basic configuration tasks are presented, including an overview of the tasks that can be accomplished with the DCUI (Direct Console User Interface). The topic of lockdown mode is covered, as well as how to configure an ESXi host to use Active Directory for user authentication, along with a little advice on user account best practices. As part of the discussion, I bring up the use of an “ESX Admins” group in Active Directory, the treatment of the root user password as an “in case of emergency” item to be tightly controlled, and the use of lockdown mode.
Today when I was leaving class, I was happy to see a new blog post from Kyle Gleed of VMware entitled “Restricting Access to the ESXi Host Console – Revisiting Lockdown Mode,” and in particular his five-step recommendation on restricting access to ESXi with version 5.1 or later:
1. Add your ESXi hosts to Active Directory. This not only allows users to use their existing Active Directory accounts to manage their ESXi hosts, but it also eliminates the need to create and maintain local user accounts on each host.
2. Create the “ESX Admins” group in Active Directory and add all your admins as members of this group. By default, when an ESXi host is added to Active Directory, the “ESX Admins” group is assigned full admin privileges. Note that you can change the name of the group and customize the privileges (follow the link for information on how to do this).
3. Vault the “root” password. As I noted above, root is still able to override lockdown mode so you want to limit access to this account. With ESXi versions 5.1 and beyond you can now assign full admin rights to named users so it’s no longer necessary to use the root account for day-to-day administration. Don’t disable the root account, set a complex password and lock it away in a safe so you can access it if you ever need to.
4. Set a timeout for both ESXiShellTimeOut and ESXiShellInteractiveTimeOut. Should you ever need to temporarily enable access to the ESXi Shell via SSH, it’s good to set these timeouts so the services will automatically be shut down and idle SSH/Shell sessions terminated.
5. Enable lockdown mode. Enabling lockdown mode prevents non-root users from logging onto the host console directly, forcing admins to manage the host through vCenter Server. Should a host ever become isolated from vCenter Server, you can retrieve the root password and log in as root to override lockdown mode. Again, be sure not to disable the root user. The point is not to disable root access, but rather to avoid having admins use it for their day-to-day activities.
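For reference, steps 4 and 5 can also be applied from the ESXi Shell on a 5.1 host. A minimal sketch follows; the 900-second timeout is an illustrative value, not a recommendation:

```shell
# Step 4: set idle timeouts (in seconds) for the ESXi Shell services and
# interactive sessions. 900 seconds (15 minutes) is just an example value.
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900

# Step 5: enable lockdown mode. Only do this after verifying the host is
# managed by vCenter Server, since it blocks direct non-root host logins.
vim-cmd vimsvc/auth/lockdown_mode_enter

# To check the current state, or to back out later:
vim-cmd vimsvc/auth/lockdown_is_enabled
vim-cmd vimsvc/auth/lockdown_mode_exit
```

These commands change host security settings, so test them on a lab host before rolling them into production.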
Terrific advice, and I appreciate the timing; I will definitely refer to this in class this week and in the future!
Recently I have been working with customers on designs for new vSphere 5.1 installs and upgrades. As part of the design, I have been specifying the installation and configuration of the vSphere ESXi Dump Collector service on their Windows vCenter Server. The ESXi dump collector service allows the collection of the diagnostic dump information generated when an ESXi host has a critical fault and generates a “purple diagnostic screen.”
This post is a walkthrough of installing and configuring the ESXi Dump Collector service on vCenter and configuring an ESXi host to use it.
The Windows Server 2008 R2 VMs I use for vCenter are configured with additional drives for installing applications and storing data. In this example from my virtual lab, I have a “d:\” drive for applications and data.
Install the vSphere ESXi Dump Collector
The installer for the dump collector is included on the vCenter installer ISO image. I mount the ISO image to the Windows 2008 R2 VM where I have installed vCenter Server.
Launch “autorun.exe” as an administrator.
After the installer starts, select “English” as the language.
On the Welcome… page, click “Next >.”
On the End User Patent Agreement page, click “Next >.”
On the End User License Agreement page, select “I accept…”; click “Next >.”
On the Destination Folder page, click the “Change…” button beside “vSphere ESXi Dump Collector repository directory:”
On the Change Current Destination Folder page, change the “Folder name:” value to “d:\…”. Click “OK.”
Back on the Destination Folder page, observe that the path has been updated and click “Next >”
On the Setup Type page, select “VMware vCenter Server installation”, then click “Next >.”
On the VMware vCenter Server Information page, enter the appropriate information for connecting to vCenter. Click “Next >” to continue.
If you are using the default self-signed SSL certificate for vCenter, you will receive a message with the SHA1 thumbprint value for the vCenter server’s certificate. Click “Yes” to trust the certificate for connecting to the vCenter server.
You can verify the thumbprint by looking at the certificate properties on your vCenter server. Notice that the thumbprint from the installer matches the thumbprint on the vCenter server’s certificate.
On the vSphere ESXi Dump Collector Port Settings page, click “Next >” to accept the default value of UDP port 6500.
On the vSphere ESXi Dump Collector Identification page, select the FQDN of the vCenter server and click “Next >.”
On the Ready to Install page, click “Install.”
After the installer has completed, click “Finish” on the Installer Completed page.
You can view the configured settings with the vSphere Client by selecting VMware ESXi Dump Collector from the Administration page.
You can also view the configuration with the vSphere Web Client by selecting the vCenter server, then browsing to the “Manage” tab and selecting “ESXi Dump Collector” under “Settings.”
Configure an ESXi host to transmit the core dump over the network to the dump collector service
Now that we have installed the dump collector service, we need to configure the ESXi hosts to send their diagnostic dump files to the vCenter server.
I set this up through the ESXi console. You will notice that I am logged in as “root,” as I had not yet configured the ESXi host to use Active Directory authentication. Any user account that has the “administrator” role on the ESXi host can configure these settings.
First, I checked the current coredump network configuration:
~ # esxcli system coredump network get
Network Server IP:
Network Server Port: 0
Next, I confirmed the name of the VMkernel interface I planned to use (“vmk0”) with the old “esxcfg-vmknic -l” command.
Then, I configured the system to send coredumps over the network through the “vmk0” vmkernel port to my vCenter server’s IPv4 address at port 6500:
~ # esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.0.0.51 --server-port 6500
You have to enter the interface name and the server IPv4 address; the port option can be omitted if you are using the default of 6500.
Then, I enabled the ESXi host to use the dump collector service:
~ # esxcli system coredump network set --enable true
I verified that the settings were correctly configured:
~ # esxcli system coredump network get
Host VNic: vmk0
Network Server IP: 10.0.0.51
Network Server Port: 6500
I checked to see if the server was running:
~ # esxcli system coredump network check
Verified the configured netdump server is running
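For repeat use, the commands above can be collected into a single sketch script. The interface name, collector address, and port are the values from my lab; adjust them for your environment:

```shell
#!/bin/sh
# Configure this ESXi host to send core dumps to a network dump collector.
# Lab values: VMkernel port vmk0, collector (vCenter) at 10.0.0.51, port 6500.
VMK=vmk0
COLLECTOR=10.0.0.51
PORT=6500

# Point coredumps at the collector over the chosen VMkernel interface
esxcli system coredump network set --interface-name "$VMK" \
    --server-ipv4 "$COLLECTOR" --server-port "$PORT"

# Enable network coredumps
esxcli system coredump network set --enable true

# Show the resulting configuration and verify the collector is reachable
esxcli system coredump network get
esxcli system coredump network check
```

Run from the ESXi Shell as a user with the administrator role; the final “check” should report that the configured netdump server is running.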
Here is a screenshot of the process:
FYI, by default, the diagnostic dump file (core dump) is stored on a local disk partition of the ESXi host. You can find the local partition from the local ESXi console (if it is enabled) with the following command:
# esxcli system coredump partition get
I have highlighted the command in the figure below:
More information about managing the ESXi core dump disk partition is available in the online documentation here.
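For completeness, the local diagnostic partition can also be inspected and changed with esxcli. A brief sketch; the “--smart” option tells the host to select an accessible local partition automatically:

```shell
# Show the currently configured and active diagnostic partitions
esxcli system coredump partition get

# List the partitions that could serve as a coredump target
esxcli system coredump partition list

# Let the host pick a suitable partition automatically and enable it
esxcli system coredump partition set --enable true --smart
```

Note that the local partition and the network dump collector are independent; keeping a local partition configured gives you a fallback if the host cannot reach the collector.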
Now that the vCenter server has the dump collector service installed and the ESXi host is configured to use it, I had to try it out!
Using the vsish tool and a specific setting that Eric Sloof of NTPRO.NL described in his post “Lets create some Kernel Panic using vsish“, I crashed the ESXi host. As you can see in the screenshots, I was rewarded with a purple screen and successful transmission of the dump over the network to my vCenter server!
Here is the coredump file that was transmitted. Success!
For more information check out these KB articles:
I finally spent a little time and have updated the vSphere page to vSphere 5.1. I intend to build on the Knowledgebase article section as items come up with customers or from the classroom.