Today I was at a customer's site. My attention was initially directed at a vCOps deployment. vCOps is a good starting point if you need a quick overview of a vSphere environment. Unfortunately, vCOps wasn't working anymore: the license had expired and the login page wasn't accessible, but the admin login page was still working. I restarted the vApp, but this didn't solve the problem. The customer owns a VMware vSphere with Operations Management Enterprise Plus license, and it would be a shame if he didn't use vCOps in his environment (> 15 hosts).
Today I changed the SCSI controller type of my Windows VMs in my lab from LSI SAS to PVSCSI. Because the VMs were installed with LSI SAS, I used the procedure described in VMware KB1010398 (Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters) to change the SCSI controller type. The main problem is that Windows doesn't have a driver for the PVSCSI adapter installed. You can force the installation of the driver using this procedure (taken from KB1010398):
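In short, the KB procedure is: add a temporary disk on a new PVSCSI adapter, power the VM on so Windows detects the adapter and installs the pvscsi driver, power it off again, remove the temporary disk and adapter, and then change the type of the existing controller to PVSCSI. If you have to do this for more than a handful of VMs, the first step can be scripted. Here is a minimal pyVmomi sketch of that step; the vCenter name, credentials and VM name are placeholders:

```python
# Minimal pyVmomi sketch: add a temporary PVSCSI controller plus a small
# thin disk to a VM, so Windows installs the pvscsi driver on next boot.
# vCenter name, credentials and VM name are placeholders - adjust as needed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'win-vm-01')
view.Destroy()

# New PVSCSI controller on bus 1 (negative keys are temporary and get
# resolved by vCenter during the reconfiguration)
ctrl = vim.vm.device.VirtualDeviceSpec()
ctrl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl.device = vim.vm.device.ParaVirtualSCSIController()
ctrl.device.key = -101
ctrl.device.busNumber = 1
ctrl.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

# Small temporary disk (1 GB, thin provisioned) attached to that controller
disk = vim.vm.device.VirtualDeviceSpec()
disk.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk.device = vim.vm.device.VirtualDisk()
disk.device.key = -102
disk.device.controllerKey = -101
disk.device.unitNumber = 0
disk.device.capacityInKB = 1024 * 1024
disk.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk.device.backing.diskMode = 'persistent'
disk.device.backing.thinProvisioned = True

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[ctrl, disk]))
# Boot the VM once, then remove the disk and controller again and change
# the type of the existing controller, as described in KB1010398.
Disconnect(si)
```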
This morning I discovered a tweet from Derek Seaman in my timeline that caught my attention.
Doing a #VCDX design? Take note of TPS being disabled in all future ESXi releases. http://t.co/qRXlABIJSs
— DΞRΞK SΞAMAN (@vDerekS) October 17, 2014

TPS stands for Transparent Page Sharing and it's one of VMware's memory management technologies. VMware ESX(i) uses four different technologies to manage host and guest memory resources (check VMware KB2017642 for more information).
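The announced change is implemented via a new host advanced setting, Mem.ShareForceSalting, which controls whether pages are shared between VMs. If you want to check where your hosts stand, something like this pyVmomi sketch could do it (vCenter name and credentials are placeholders; hosts without the corresponding patch don't know the option yet):

```python
# pyVmomi sketch: report the Mem.ShareForceSalting advanced setting for all
# ESXi hosts - the setting VMware uses to control inter-VM TPS behavior.
# vCenter name and credentials are placeholders; hosts without the
# corresponding patch don't have the option and raise an InvalidName fault.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    try:
        opt = host.configManager.advancedOption.QueryOptions('Mem.ShareForceSalting')
        print('%s: Mem.ShareForceSalting = %s' % (host.name, opt[0].value))
    except vim.fault.InvalidName:
        print('%s: option not present (host not patched yet)' % host.name)
view.Destroy()
Disconnect(si)
```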
As part of a project, old server hardware was replaced with shiny new hardware. Besides the server hardware, the storage hardware and infrastructure were also replaced. The new hardware was installed alongside the old hardware, and because the customer has a high virtualization ratio, nearly all servers were VMs and the migration of the VMs was done without downtime. The customer uses a Windows Server 2008 R2 failover cluster for file services and MS SQL Server.
While I was onsite at a customer to decommission an old storage system, one of my very first tasks was to unmount and detach some old datastores. No big deal, until I saw that one ESXi host after another went to “not responding”. Time for a heart attack, but hey: why should a host run into a PDL/APD while I was unmounting datastores on the vSphere layer? The LUNs were still there and accessible.
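By the way, unmounting and detaching on the vSphere layer maps to two distinct per-host API calls, and the order matters: unmount the VMFS volume first, then detach the backing LUN. A minimal pyVmomi sketch, with connection details, host and datastore names being placeholders:

```python
# Minimal pyVmomi sketch: cleanly unmount a VMFS datastore and then detach
# the backing LUN on a single ESXi host (the order matters - unmount first,
# detach second). Connection details and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
view.Destroy()

storage = host.configManager.storageSystem
ds = next(d for d in host.datastore if d.name == 'old_datastore_01')
vmfs_uuid = ds.info.vmfs.uuid
naa = ds.info.vmfs.extent[0].diskName          # e.g. 'naa.600...'

# Step 1: unmount the VMFS volume on this host
storage.UnmountVmfsVolume(vmfsUuid=vmfs_uuid)

# Step 2: detach the SCSI LUN that backs the datastore
lun = next(l for l in storage.storageDeviceInfo.scsiLun
           if l.canonicalName == naa)
storage.DetachScsiLun(lunUuid=lun.uuid)

Disconnect(si)
```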
The first part of this (short) blog series covered the basics of VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. This second part will cover the basic tasks to configure Peer Persistence. Please note that this blog post relies on the features and supported configurations of 3PAR OS 3.1.3! This is essential to know, because 3.1.3 introduced some important enhancements with respect to 3PAR Remote Copy.
A customer contacted me because he had trouble moving a VM between two clusters. The hosts in the source cluster used vNetwork Standard Switches (vSS), the hosts in the destination cluster a vNetwork Distributed Switch (vDS). Because of this, a host in the destination cluster had an additional vSS with the same port groups that were used in the source cluster. This configuration allowed the customer to do vMotion without shared storage between the two clusters.
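Such a shared-nothing vMotion (available since vSphere 5.1) is, on the API level, a single RelocateVM_Task call with a target host, resource pool and datastore. A minimal pyVmomi sketch, with all names being placeholders:

```python
# Minimal pyVmomi sketch: vMotion a powered-on VM to a host in another
# cluster without shared storage (vSphere 5.1+). All names are placeholders;
# the port group names must match on both sides, as described above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find(vim.VirtualMachine, 'app-vm-01')
target_host = find(vim.HostSystem, 'esx-dst-01.example.com')
target_ds = find(vim.Datastore, 'dst_datastore_01')

spec = vim.vm.RelocateSpec()
spec.host = target_host
spec.pool = target_host.parent.resourcePool   # root pool of the target cluster
spec.datastore = target_ds

# Compute and storage vMotion in one task
vm.RelocateVM_Task(spec=spec,
                   priority=vim.VirtualMachine.MovePriority.defaultPriority)
Disconnect(si)
```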
The title of this blog post mentions two terms that have to be explained. First, a VMware vSphere Metro Storage Cluster (or VMware vMSC) is a configuration of a VMware vSphere cluster that is based on a stretched storage cluster. Second, HP 3PAR Peer Persistence adds functionality to HP 3PAR Remote Copy software and HP 3PAR OS so that two 3PAR storage systems form a nearly continuously available storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.
The whole story began with a tweet and a picture:
Spotted Marvin on VMware campus during a break this morning "first hyperconverged infrastructure appliance " pic.twitter.com/1iIPocjREX
— Fletcher Cocquyt (@Cocquyt) June 7, 2014

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.
What is EVO:RAIL?
At HP Discover in June 2013, HP announced the HP 3PAR StoreServ 7450 All-Flash Array. To optimize the StoreServ platform for all-flash workloads, HP made some changes to the hardware of the nodes: the 7450 uses 8-core Intel Xeon CPUs instead of 6-core 1.8 GHz CPUs, and the cache was doubled from 64 GB to 128 GB. HP also made some changes to the 3PAR OS: additional cache flush queues to separate the flushing of cache for rotating rust and SSD devices.