New VMware VCP Recertification Policy

Today, VMware Certification announced a change to the VMware Certified Professional (VCP) program with the institution of a recertification policy. The recertification policy is posted here:

http://mylearn.vmware.com/mgrReg/plan.cfm?plan=46667&ui=www_cert

The policy states that any VCP certification earned before March 10, 2013 must be recertified by March 10, 2015. Here are the three options for recertification:

  1. Take the current exam for your existing VCP certification solution track. For example, if you are a VCP3, you could take the current VCP5-Data Center Virtualization (VCP5-DCV) exam.
  2. Earn a new VCP certification in a different solution track. For example, if you are a VCP-Cloud, you could recertify by earning the VCP5-Desktop (VCP5-DT) certification.
  3. Advance to the next level by earning a VMware Certified Advanced Professional (VCAP) certification. For example, if you are a VCP5-DCV, you could earn the VCAP5-DCA certification.

As described in the recertification policy, VMware has added a note to our MyLearn transcripts:

[Image: note added to the MyLearn certification transcript]

In my case, I earned my VCP5-DCV in September of 2011, so I need to recertify within the next year. My transcript shows the expiration date:

[Image: transcript showing the recertification due date]

Here are use cases for recertification that were provided by VMware Education:

[Slides: VMware Education recertification overview]

I also have a VCP5-DT and a VCAP4-DCD that were earned in 2011. I am looking at this as motivation to get my VCAP-DCD updated and to make good on my goal of earning a VCAP-DCA before March 2015.

I support this kind of recertification requirement and appreciate the value of staying current with the technology.


New VMware vSphere Blog post on ESXi console lockdown

This week I am back in the classroom teaching a vSphere 5.5: Install, Configure and Manage class for VMware in Sacramento, CA. During the first few sections of the class, the ESXi user interfaces and basic configuration tasks are presented, including an overview of the tasks that can be accomplished with the DCUI (Direct Console User Interface). The topic of lockdown mode is mentioned, as well as how to configure an ESXi host to use Active Directory for user authentication, along with a little advice on user account best practices. As part of the discussion, I bring up the use of an “ESX Admins” group in Active Directory, the treatment of the root user password as an “in case of emergency” item to be tightly controlled, and the use of lockdown mode.

Today when I was leaving class, I was happy to see a new blog post from Kyle Gleed of VMware entitled “Restricting Access to the ESXi Host Console – Revisiting Lockdown Mode” and, in particular, his five-step recommendation for restricting access to ESXi hosts running version 5.1 or later:

1. Add your ESXi hosts to Active Directory. This not only allows users to use their existing Active Directory accounts to manage their ESXi hosts, but it also eliminates the need to create and maintain local user accounts on each host.

2. Create the “ESX Admins” group in Active Directory and add all your admins as members of this group. By default, when an ESXi host is added to Active Directory, the “ESX Admins” group is assigned full admin privileges. Note that you can change the name of the group and customize the privileges (follow the link for information on how to do this).

3. Vault the “root” password. As I noted above, root is still able to override lockdown mode, so you want to limit access to this account. With ESXi versions 5.1 and beyond, you can now assign full admin rights to named users, so it is no longer necessary to use the root account for day-to-day administration. Don’t disable the root account; set a complex password and lock it away in a safe so you can access it if you ever need to.

4. Set a timeout for both the ESXiShellTimeOut and the ESXiShellInteractiveTimeOut. Should you ever need to temporarily enable access to the ESXi Shell via SSH, these timeouts ensure the services are automatically shut down and idle SSH/Shell sessions are terminated.

5. Enable lockdown mode. Enabling lockdown mode prevents non-root users from logging on to the host console directly, which forces admins to manage the host through vCenter Server. Should a host ever become isolated from vCenter Server, you can retrieve the root password and log in as root to override lockdown mode. Again, be sure not to disable the root user. The point is not to disable root access, but rather to avoid having admins use it for their day-to-day activities.
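The two timeouts in step 4 can be set from the ESXi Shell with esxcli. A minimal sketch, assuming a 5.1 host where the standard /UserVars advanced options are present (the 600-second values are just examples):

```shell
# Shut down the ESXi Shell/SSH services after the timeout (seconds) elapses.
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 600

# Terminate idle interactive shell sessions after the timeout (seconds).
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 600

# Confirm the value took effect.
esxcli system settings advanced list -o /UserVars/ESXiShellTimeOut
```

The same options can also be set in the vSphere Client under the host's Advanced Settings.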

Terrific advice, and I appreciate the timing. I will definitely refer to this in class this week and in the future!


Over 50 Free Instructional Videos from VMware

Earlier this month, VMware launched a new site, VMwarelearning.com, with more than 50 technical videos. These videos offer tips, design guidelines, best practices and product feature knowledge from VMware technical experts. This is a terrific way to get valuable information and technical expertise for FREE!

Configure vSphere 5.1 for remote debug logging

Recently I have been working with customers on designs for new vSphere 5.1 installs and upgrades. As part of the design, I have been specifying the installation and configuration of the vSphere ESXi Dump Collector service on their Windows vCenter Server. The ESXi dump collector service allows the collection of the diagnostic dump information generated when an ESXi host has a critical fault and generates a “purple diagnostic screen.”

This post is a walkthrough of installing and configuring the ESXi Dump Collector service on vCenter and configuring an ESXi host to use it.

The Windows Server 2008 R2 VMs I use for vCenter are configured with additional drives for installing applications and storing data. In this example from my virtual lab, I have a “d:\” drive for applications and data.

Install the vSphere ESXi Dump Collector

The installer for the dump collector is included on the vCenter installer ISO image. I mount the ISO image to the Windows 2008 R2 VM where I have installed vCenter Server.

Launch “autorun.exe” as an administrator.

From the VMware vCenter Installer, select “VMware vSphere ESXi Dump Collector”. Then click “Install” to begin the installation.

After the installer starts, select “English” as the language.

On the Welcome… page, click “Next >.”

On the End User Patent Agreement page, click “Next >.”
On the End User License Agreement page, select “I accept…”; click “Next >.”
On the Destination Folder page, click the “Change…” button beside “vSphere ESXi Dump Collector repository directory:”
On the Change Current Destination Folder page, change the “Folder name:” value to “d:\…”. Click “OK.”
Back on the Destination Folder page, observe that the path has been updated and click “Next >.”

On the Setup Type page, select “VMware vCenter Server installation”, then click “Next >.”

On the VMware vCenter Server Information page, enter the appropriate information for connecting to vCenter. Click “Next >” to continue.

If you are using the default self-signed SSL certificate for vCenter, you will receive a message with the SHA1 thumbprint value for the vCenter server’s certificate. Click “Yes” to trust the certificate and connect to the vCenter server.

You can verify the thumbprint by looking at the certificate properties on your vCenter server.  Notice that the thumbprint from the installer matches the thumbprint on the vCenter server’s certificate.
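If you prefer the command line, you can also compute the SHA1 thumbprint yourself with openssl. In practice you would point openssl at an exported copy of the vCenter certificate; in this sketch, a throwaway self-signed certificate is generated as a stand-in (the file paths and CN are hypothetical):

```shell
# Stand-in for the exported vCenter certificate: generate a throwaway
# self-signed certificate (in real use, skip this and use your exported .crt).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=vcenter.lab.local" \
  -keyout /tmp/vc.key -out /tmp/vc.pem 2>/dev/null

# Print the SHA1 thumbprint to compare against the installer's prompt.
openssl x509 -noout -fingerprint -sha1 -in /tmp/vc.pem
```

The colon-separated hex string should match the thumbprint shown by the installer and in the certificate properties dialog.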

On the vSphere ESXi Dump Collector Port Settings page, click “Next >” to accept the default value of UDP port 6500.

On the vSphere ESXi Dump Collector Identification page, select the FQDN of the vCenter server and click “Next >.”

On the Ready to Install page, click “Install.”

After the installer has completed, click “Finish” on the Installer Completed page.

You can view the configured settings with the vSphere Client by selecting VMware ESXi Dump Collector from the Administration page.

You can also view the configuration with the vSphere Web Client by selecting the vCenter server, then browsing to the “Manage” tab and selecting “ESXi Dump Collector” under “Settings.”

Configure an ESXi host to transmit core dumps over the network to the dump collector service

Now that we have installed the dump collector service, we need to configure the ESXi hosts to send their diagnostic dump files to the vCenter server.

I set this up through the ESXi console. You will notice that I am logged in as “root” because I had not yet configured the ESXi host to use Active Directory authentication. Any user account that has the “administrator” role on the ESXi host can configure these settings.

First, I checked the current coredump network configuration:

~ # esxcli system coredump network get
   Enabled: false
   Host VNic:
   Network Server IP:
   Network Server Port: 0

Next, I confirmed the name of the vmkernel interface I planned to use (“vmk0”) with the old “esxcfg-vmknic -l” command.

Then, I configured the system to send coredumps over the network through the “vmk0” vmkernel port to my vCenter server’s IPv4 address at port 6500:

~ # esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.0.0.51 --server-port 6500

You must specify the interface name and the server IPv4 address. The port is optional if you are using the default of 6500.

Then, I enabled the ESXi host to use the dump collector service:
~ # esxcli system coredump network set --enable true

I verified that the settings were correctly configured:
~ # esxcli system coredump network get
   Enabled: true
   Host VNic: vmk0
   Network Server IP: 10.0.0.51
   Network Server Port: 6500

I checked to see if the server was running:
~ # esxcli system coredump network check
Verified the configured netdump server is running
~ #

Here is a screenshot of the process:

FYI, by default, the diagnostic dump file (core dump) is stored on a local disk partition of the ESXi host. You can find the local partition from the local ESXi console (if it is enabled) with the following command:

# esxcli system coredump partition get

I have highlighted the command in the figure below:

More information about managing the ESXi core dump disk partition is in the online documentation here.
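For reference, the local diagnostic partition can be inspected and managed from the ESXi Shell with the same esxcli coredump namespace. A sketch of the related commands (the --smart option lets ESXi pick an appropriate partition automatically):

```shell
# Show the active and configured local diagnostic partition.
esxcli system coredump partition get

# List the partitions that could be used as a diagnostic partition.
esxcli system coredump partition list

# Let ESXi choose and enable a suitable local partition automatically.
esxcli system coredump partition set --smart --enable true
```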

Now that the vCenter server has the dump collector service installed and the ESXi host is configured to use it, I had to try it out!

Using the vsish tool and a specific setting that Eric Sloof of NTPRO.NL described in his post “Lets create some Kernel Panic using vsish,” I crashed the ESXi host. As you can see in the screenshots, I was rewarded with a purple screen and success with transmitting the dump over the network to my vCenter server!
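As I recall from Eric's post, the vsish setting that triggers the panic looks like this. Obviously, only ever run this against a disposable lab host:

```shell
# DANGER: this immediately purple-screens (PSODs) the host. Lab use only.
# The "crashMe" node is a debug hook exposed through vsish on ESXi.
vsish -e set /reliability/crashMe/Panic 1
```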

The “CrashME” PSOD

Here is the coredump file that was transmitted. Success!

The coredump file on the vCenter server in the repository

For more information check out these KB articles:

ESXi Network Dump Collector in VMware vSphere 5.x

Configuring the Network Dump Collector service in vSphere 5.x

Clearing up an AD Lightweight Directory Services error on vCenter Server systems

Recently I was onsite with a customer helping them deploy a new vSphere 5.1 environment to host a new Exchange 2010 system. As part of the deployment, we set up Alan Renouf’s vCheck 6 script and started working through the process of configuring it to run as a scheduled task. As we manually ran the task, we noticed that the output showed errors every minute for AD Web Services and AD Lightweight Directory Services (ADAM).

We found the log entries in the AD Web Services log.

A little digging uncovered that the event 1209 error is reported when there is a problem with the port numbers in the registry for AD Web Services LDAP access (389/636).
http://blogs.technet.com/b/askds/archive/2010/04/09/friday-mail-sack-while-the-ned-s-away-edition.aspx#adws

On inspection of the registry key, the “Port SSL” value type was incorrect and the data was missing. According to the TechNet blog post, the value type should be “REG_DWORD” and the default data is 636.

I deleted the existing incorrect value and created a new value with the REG_DWORD type and the value data of 636 decimal.
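The same fix can be scripted from an elevated Windows Command Prompt. Note that the key path below is an assumption based on where the AD LDS instance's parameters are typically stored; verify the actual location of the "Port SSL" value on your system before changing anything:

```shell
:: Sketch only (Windows, run elevated). The ADAM_VCMSDS parameters key path
:: is an assumption; confirm it in regedit before running these commands.
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\ADAM_VCMSDS\Parameters" /v "Port SSL" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\ADAM_VCMSDS\Parameters" /v "Port SSL" /t REG_DWORD /d 636
```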

Upon checking the Windows event logs, I could see that AD Web Services was already using the corrected value, so no service restart was required.

The next log entry displayed the VCMSDS instance and the LDAP/LDAPS (SSL) ports it is configured to use.

After this vCenter system was fixed, we checked all of the other vCenter servers onsite and found that the vCenter 4.1 server they were using for non-production had the same error. That vCenter server was running on Windows 2003, and there we did have to stop and restart the AD Web Services service to load the corrected SSL port value and resolve the error.

Thanks to Alan Renouf and the vCheck contributors at Virtu-Al.net for grabbing and displaying this error.

VMware Partner Exchange 2010

I just booked my flight to Las Vegas for VMware’s Partner Exchange. I will be attending the partner “Post-Sales Accreditation Bootcamp” on the weekend and staying for a couple of VMware View 4 design sessions on Tuesday. I have a cousin who lives in Las Vegas, and Friday is his birthday. If I can locate him, I will look him up! Thanks to my boss for picking up the tab! I will make sure he and the rest of our company get a great return on the investment!

New patches released for ESX

VMware released six patches for ESX 3.5, including:

VMware ESX 3.5, Patch ESX350-200905401-BG: Updates vmkernel and hostd RPMs. Critical updates related to HA failover of VMs on NFS datastores and invalid license issues. Host reboot required.
VMware ESX 3.5, Patch ESX350-200905402-BG: Updates VMX RPM. General update to address a robustness issue with VMX. No host reboot required.
VMware ESX 3.5, Patch ESX350-200905403-BG: Updates aacraid driver for Adaptec. Replaces the Adaptec aacraid_esx30 driver to mitigate potential failure under heavy load on some IBM, Sun, or Fujitsu hosts. Host reboot required.
VMware ESX 3.5, Patch ESX350-200905404-BG: Updates tzdata package. Updates time zone information for changes in Brazil and Argentina. No host reboot required.
VMware ESX 3.5, Patch ESX350-200905405-BG: Updates kernel-source and VMNIX. This patch updates kernel-source and kernel-vmnix to support the aacraid driver update. Host reboot required.

One of the locations that VMware lists updates is on the VMware Knowledge Base Blog.