Web resource links for VMware NSX: Install, Configure and Manage [6.4] class

As an instructor, I like to have all the material referenced in a class manual available as I prepare for the class and when I am preparing to implement the “stuff” we cover in class. So I have gone through the manual and created a page that gathers, in one place, all the online resources, papers, and articles mentioned in the VMware NSX: Install, Configure and Manage [6.4] course.

VMware announces vSphere 6.5!

Today at VMworld Europe, VMware announced vSphere 6.5. This highly anticipated release promises to deliver on several key features and components that have been in the works for some time. Among the anticipated features are native backup and restore of the vCenter 6.5 appliance, the HTML5 vSphere Client, native HA for the vCenter 6.5 appliance, and Update Manager integrated with the vCenter 6.5 appliance. This release also brings vMotion encryption, VM disk-level encryption, vSphere Integrated Containers, major enhancements to DRS, dramatic improvements to API and automation capabilities through enhanced REST APIs and CLIs, and improvements to the logs and an integrated GUI for Auto Deploy, to name just a few.
For more information on the vSphere 6.5 announcements visit VMware’s vSphere Blog posts:

Introducing vSphere 6.5
What’s New in vSphere 6.5: vCenter Server
What’s new in vSphere 6.5: Security
What’s New in vSphere 6.5: Host & Resource Management and Operations

The vSphere 6.x product page at VMware.com also highlights the product versions and licensing:

vSphere and vSphere with Operations Management

With VMworld Europe 2016 underway, expect more announcements from VMware.

New VMware vSphere Blog post on ESXi console lockdown

This week I am back in the classroom teaching a vSphere 5.5: Install, Configure and Manage class for VMware in Sacramento, CA. During the first few sections of the class, the ESXi user interfaces and basic configuration tasks are presented, including an overview of the tasks that can be accomplished with the DCUI (Direct Console User Interface). The topic of lockdown mode is mentioned, as well as how to configure an ESXi host to use Active Directory for user authentication and a little advice on user account best practices. As part of the discussion, I bring up the use of an “ESX Admins” group in Active Directory, the treatment of the root user password as an “in case of emergency” item to be tightly controlled, and the use of lockdown mode.

Today when I was leaving class, I was happy to see a new blog post from Kyle Gleed of VMware entitled “Restricting Access to the ESXi Host Console – Revisiting Lockdown Mode,” and in particular his five-step recommendation for restricting access to ESXi 5.1 and later:

1. Add your ESXi hosts to Active Directory. This not only allows users to use their existing Active Directory accounts to manage their ESXi hosts, but it also eliminates the need to create and maintain local user accounts on each host.

2. Create the “ESX Admins” group in Active Directory and add all your admins as members of this group. By default, when an ESXi host is added to Active Directory, the “ESX Admins” group is assigned full admin privileges. Note that you can change the name of the group and customize the privileges (follow the link for information on how to do this).

3. Vault the “root” password. As I noted above, root is still able to override lockdown mode, so you want to limit access to this account. With ESXi versions 5.1 and beyond you can now assign full admin rights to named users, so it’s no longer necessary to use the root account for day-to-day administration. Don’t disable the root account; set a complex password and lock it away in a safe so you can access it if you ever need to.

4. Set a timeout for both the ESXiShellTimeOut and the ESXiShellInteractiveTimeOut. Should you ever need to temporarily enable access to the ESXi Shell via SSH, it’s good to set these timeouts so the services are automatically shut down and idle SSH/Shell sessions are terminated (a command-line sketch of these timeouts and of lockdown mode follows after the list).

5. Enable Lockdown Mode. Enabling lockdown mode prevents non-root users from logging onto the host console directly, which forces admins to manage the host through vCenter Server. Should a host ever become isolated from vCenter Server, you can retrieve the root password and log in as root to override lockdown mode. Again, be sure not to disable the root user; the point is not to disable root access, but to avoid having admins use it for their day-to-day activities.
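For reference, here is a minimal command-line sketch of steps 4 and 5 on an ESXi 5.1 or later host. The timeout value and the vim-cmd call are my own examples, not something Kyle prescribes, so treat them as a starting point and verify the option names, units, and behavior for your build (I believe the UserVars options take seconds, and that vim-cmd exposes vimsvc/auth/lockdown_mode_enter on 5.x hosts):

~ # esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
~ # esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900
~ # esxcli system settings advanced list -o /UserVars/ESXiShellTimeOut
~ # vim-cmd vimsvc/auth/lockdown_mode_enter

The first two commands set the availability and idle-session timeouts, the list command echoes the current value back so you can confirm the change took, and the vim-cmd call enables lockdown mode; vim-cmd vimsvc/auth/lockdown_mode_exit reverses it if you ever need direct console access again.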

Terrific advice, and I appreciate the timing; I will definitely refer to this in class this week and in the future!

Configure vSphere 5.1 for remote debug logging

Recently I have been working with customers on designs for new vSphere 5.1 installs and upgrades. As part of the design, I have been specifying the installation and configuration of the vSphere ESXi Dump Collector service on their Windows vCenter Server. The ESXi dump collector service allows the collection of the diagnostic dump information generated when an ESXi host has a critical fault and generates a “purple diagnostic screen.”

This post is a walk through of installing and configuring the ESXi Dump Collector service on vCenter and configuring an ESXi host to use it.

The Windows Server 2008 R2 VMs I use for vCenter are configured with additional drives for installing applications and storing data. In this example from my virtual lab, I have a “d:\” drive for applications and data.

Install the vSphere ESXi Dump Collector

The installer for the dump collector is included on the vCenter installer ISO image. I mount the ISO image to the Windows 2008 R2 VM where I have installed vCenter Server.

Launch “autorun.exe” as an administrator.

From the VMware vCenter Installer, select “VMware vSphere ESXi Dump Collector”. Then click “Install” to begin the installation.

After the installer starts, select “English” as the language.

On the Welcome… page, click “Next >.”

On the End User Patent Agreement page, click “Next >.”
On the End User License Agreement page, select “I accept…”; click “Next >.”
On the Destination Folder page, click the “Change…” button beside “vSphere ESXi Dump Collector repository directory:”
On the Change Current Destination Folder page, change the “Folder name:” value to “d:\…”. Click “OK.”
Back on the Destination Folder page, observe that the path has been updated and click “Next >”

On the Setup Type page, select “VMware vCenter Server installation”, then click “Next >.”

On the VMware vCenter Server Information page, enter the appropriate information for connecting to vCenter. Click “Next >” to continue.

If you are using the default self-signed SSL certificate for vCenter, you will receive a message with the SHA1 thumbprint value for the vCenter server’s certificate. Click “Yes” to trust the certificate and connect to the vCenter server.

You can verify the thumbprint by looking at the certificate properties on your vCenter server.  Notice that the thumbprint from the installer matches the thumbprint on the vCenter server’s certificate.

On the vSphere ESXi Dump Collector Port Settings page, click “Next >” to accept the default value of UDP port 6500.

On the vSphere ESXi Dump Collector Identification page, select the FQDN of the vCenter server and click “Next >.”

On the Ready to Install page, click “Install.”

After the installer has completed, click “Finish” on the Installer Completed page.

You can view the configured settings with the vSphere Client by selecting VMware ESXi Dump Collector from the Administration page.

You can also view the configuration with the vSphere Web Client by selecting the vCenter server, then browsing to the “Manage” tab and selecting “ESXi Dump Collector” under “Settings.”

Configure an ESXi host to transmit core dumps over the network to the dump collector service

Now that we have installed the dump collector service, we need to configure the ESXi hosts to send their diagnostic dump files to the vCenter server.
I set this up through the ESXi console. You will notice that I am logged in as “root,” as I had not yet configured the ESXi host to use Active Directory authentication. Any user account that has the “administrator” role on the ESXi host can configure these settings.

First, I checked the current coredump network configuration:

~ # esxcli system coredump network get
   Enabled: false
   Host VNic:
   Network Server IP:
   Network Server Port: 0

Next, I confirmed the name of the vmkernel connection I planned to use, “vmk0,” with the old “esxcfg-vmknic -l” command.
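If you prefer the newer esxcli namespace over the legacy esxcfg-* commands, the same information is available there; a quick sketch (output omitted):

~ # esxcli network ip interface list
~ # esxcli network ip interface ipv4 get

The first command lists the vmkernel interfaces and where they connect, and the second shows their IPv4 addressing, which is enough to confirm you are pointing the coredump configuration at the right vmk interface.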

Then, I configured the system to send coredumps over the network through the “vmk0” vmkernel port to my vCenter server’s IPv4 address at port 6500:

~ # esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.0.0.51 --server-port 6500

You have to enter the interface name and server IPv4 address. The port is optional if you are using the default of 6500.

Then, I enabled the ESXi host to use the dump collector service:
~ # esxcli system coredump network set --enable true

I verified that the settings were correctly configured:
~ # esxcli system coredump network get
   Enabled: true
   Host VNic: vmk0
   Network Server IP: 10.0.0.51
   Network Server Port: 6500

I checked to see if the server was running:
~ # esxcli system coredump network check
Verified the configured netdump server is running
~ #

Here is a screenshot of the process:

FYI, by default, the diagnostic dump file (core dump) is stored on a local disk partition of the ESXi host. You can find the local partition from the local ESXi console (if it is enabled) with the following command:

# esxcli system coredump partition get

I have highlighted the command in the figure below:

More information about managing the ESXi core dump disk partition is in the online documentation here.
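If you want to inspect or adjust the local core dump partition from the ESXi Shell, the esxcli coredump partition namespace covers the basics. This is just a sketch of the commands I would expect to use on a 5.x host; in particular, --smart asks ESXi to choose the most suitable accessible partition, so verify the behavior for your build before relying on it:

~ # esxcli system coredump partition list
~ # esxcli system coredump partition set --enable true --smart
~ # esxcli system coredump partition get

The list command shows the available diagnostic partitions and which one is active, the set command enables a partition (here letting ESXi pick one), and the get command confirms the active and configured partition.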

Now that the vCenter server has the dump collector service installed and the ESXi host is configured to use it, I had to try it out!

Using the vsish tool and a specific setting that Eric Sloof of NTPRO.NL described in his post “Lets create some Kernel Panic using vsish,” I crashed the ESXi host. As you can see in the screenshots, I was rewarded with a purple screen and success with transmitting the dump over the network to my vCenter server!

The “CrashME” PSOD

Here is the coredump file that was transmitted. Success!

The coredump file on the vCenter server in the repository

For more information check out these KB articles:

ESXi Network Dump Collector in VMware vSphere 5.x

Configuring the Network Dump Collector service in vSphere 5.x

VMware vSphere 4.0 Update 2 is released

This evening VMware released Update 2 for ESX/ESXi 4, vCenter Management Server 4, vCenter Update Manager 4 and VMware Data Recovery.
A quick scan of the ESX 4 Update 2 release notes shows expanded support for FT on Intel i3/i5 Clarkdale, Xeon 34xx Clarkdale, and Xeon 56xx processors, and support for IOMMU on AMD Opteron 61xx and 41xx processors. There is also guest OS support for Ubuntu 10.04, and esxtop and resxtop now include the NFS performance statistics Reads/s, Writes/s, MBRead/s, MBWrtn/s, cmd/s, and gavg/s latency.

Included in the resolved issues is a change in the way the Snapshot Manager “Delete All” operation works. In previous versions, the snapshot farthest from the base disk was committed to its immediate parent, then that parent was committed to its parent, and so on until the last remaining snapshot was committed to the base. The release notes report that this operation now starts with the snapshot closest to the base disk and works toward the farthest. With a chain of base ← S1 ← S2 ← S3, for example, the old order commits S3 into S2, then the combined S2 into S1, then S1 into the base, temporarily growing the intermediate deltas and rewriting the same data several times; the new order commits S1 into the base first and works outward. This should reduce the amount of disk space required during the “delete all/commit” operation and reduce the amount of data that is repeatedly committed. I think this is a great change. I have seen customers run out of space in datastores when they failed to keep track of active snapshots and didn’t understand the “delete all/commit” process.

The vCenter Management Server 4 Update 2 release notes list support for guest customization of:

◦Windows XP Professional SP2 (x64) serviced by Windows Server 2003 SP2
◦SLES 11 (x32 and x64)
◦SLES 10 SP3 (x32 and x64)
◦RHEL 5.5 Server Platform (x32 and x64)
◦RHEL 5.4 Server Platform (x32 and x64)
◦RHEL 4.8 Server Platform (x32 and x64)
◦Debian 5.0 (x32 and x64)
◦Debian 5.0 R1 (x32 and x64)
◦Debian 5.0 R2 (x32 and x64)

Among the resolved items are an updated JRE (1.5.0_22), a number of fixes related to Host Profiles, support for vSwitch portgroup names longer than 50 characters, advanced settings to allow the use of vDS connections as additional HA heartbeat networks, and the addition of a parameter in vpxd.cfg to set a greater timeout value for VMotion operations involving VMs with swap files on local datastores, among many others. The known issues section includes a statement that while USB controllers can be added to VMs, attaching USB devices is not supported, and that vSphere Web Access is experimentally supported.

The vCenter Update Manager 4 Update 2 release notes list improved operation on low-bandwidth, high-latency, and slow networks, including a reference to KB 1017253, which details the configuration of extended timeout values for ESX, vCenter, and Update Manager Update 2.
The compatibility matrix shows that Update Manager 4 Update 2 is only compatible with vCenter Management Server 4 Update 2.

VMware Data Recovery Update 2 includes the following enhancements:

•File Level Restore (FLR) is now available for use with Linux.
•Each vCenter Server instance supports up to ten Data Recovery backup appliances.
•The vSphere Client plug-in supports fast switching among Data Recovery backup appliances.
•Miscellaneous vSphere Client Plug-In user interface enhancements including:
◦The means to name backup jobs during their creation.
◦Additional information about the current status of destination disks including the disk’s health and the degree of space savings provided by the deduplication store’s optimizations.
◦Information about the datastore from which virtual disks are backed up.

Support for up to 10 Data Recovery appliances per vCenter Server allows up to 1,000 backup jobs (100 per appliance × 10), which is a significant increase in backup capacity.

The build numbers for the various items are:

ESX 4.0 Update 2 Build 261974
ESXi 4.0 Update 2 Installable Build 261974
ESXi 4.0 Update 2 Embedded Build 261974
VMware Tools Build 261974
vCenter Server 4.0 Update 2 Build 258672
vCenter Update Manager 4.0 Update 2 Build 264019

vSphere 4 Update 2 components can be downloaded here.

New VMware network technical papers published

Network Segmentation in Virtualized Environments

As virtualization becomes the standard infrastructure for server deployments, a growing number of organizations want to consolidate servers that belong to different trust zones. The demand is increasing for information to help network security professionals understand and mitigate the risks associated with this practice. This paper provides detailed descriptions of three different virtualized trust zone configurations and identifies best practice approaches that enable secure deployment.

DMZ Virtualization Using VMware vSphere 4 and the Cisco Nexus 1000V Virtual Switch

This paper tackles the subject of DMZ security and virtualization. It covers a number of DMZ security requirements and scenarios and shows how vSphere users can implement the Cisco Nexus 1000V virtual switch in a DMZ.

VMware Security Advisory 2009-0008

VMware has released security advisory VMSA-2009-0008. The advisory is for a vulnerability in an MIT Kerberos 5 package in the service console. The advisory explains:

An input validation flaw in the asn1_decode_generaltime function in MIT Kerberos 5 before 1.6.4 allows remote attackers to cause a denial of service or possibly execute arbitrary code via vectors involving an invalid DER encoding that triggers a free of an uninitialized pointer. A remote attacker could use this flaw to crash a network service using the MIT Kerberos library, such as kadmind or krb5kdc, by causing it to dereference or free an uninitialized pointer or, possibly, execute arbitrary code with the privileges of the user running the service.
NOTE: ESX by default is unaffected by this issue, the daemons kadmind and krb5kdc are not installed in ESX.

The advisory goes on to state that all currently supported versions of ESX (not ESXi) are affected.
For ESX 3.5, the patch is ESX 3.5.0 ESX350-200906407-SG
md5sum: 6b8079430b0958abbf77e944a677ac6b
KB Article: VMware ESX 3.5, Patch ESX350-200906407-BG: Updates krb5-libs and pam_krb5
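Before applying the patch from the ESX 3.5 service console, it is worth a quick sanity check of the download and of the currently installed packages. A small sketch, assuming the bundle filename matches the patch ID (adjust it to whatever you actually downloaded):

# md5sum ESX350-200906407-SG.zip
# rpm -q krb5-libs pam_krb5

The md5sum output should match the value published above, and rpm -q reports the installed versions of the krb5-libs and pam_krb5 packages so you can confirm, before and after patching, which builds the host is running.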

For ESX 2.5.5, 3.0.2, 3.0.3, and 4.0, patches are pending.

You can subscribe to VMware Security announcements here: http://lists.vmware.com/mailman/listinfo/security-announce