There’s a world below clouds and enterprise environments with thousands of VMs and hundreds or thousands of hosts. A world that consists of at most three hosts. I’m working with quite a few customers that are using VMware vSphere Essentials Plus. Those environments typically consist of two or three hosts and somewhere between 10 and 100 VMs. Just to mention it: I don’t have any VMware vSphere Essentials customers. I can’t see any benefit in buying that license.
A few days ago I talked to a colleague from our sales team, and we discussed different solutions for a customer. I will spare you the details, but in the course of that discussion we came across PernixData FVP, HP 3PAR Adaptive Optimization, HP 3PAR Adaptive Flash Cache and DataCore SANsymphony-V. And then the question of all questions came up: “What is the difference?”.
Simplify, then add Lightness

Let’s talk about tiering. To make it simple: tiering moves a block from one tier to another, depending on how often that block is accessed within a specific period of time.
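To illustrate the mechanism, here is a minimal sketch of frequency-based tiering in Python. The tier names, the access counter and the thresholds are purely illustrative assumptions, not how 3PAR Adaptive Optimization or any other product actually implements it.

```python
from collections import Counter

# Illustrative tiers, fastest first; real arrays use e.g. SSD/FC/NL tiers.
TIERS = ["ssd", "fc", "nl"]
PROMOTE_THRESHOLD = 100   # accesses per interval needed to move a block up
DEMOTE_THRESHOLD = 10     # at or below this, a block moves down

def rebalance(block_tier, access_counts):
    """Move blocks up or down one tier, based on how often they were
    accessed during the last measurement interval."""
    for block, tier in block_tier.items():
        hits = access_counts[block]
        idx = TIERS.index(tier)
        if hits >= PROMOTE_THRESHOLD and idx > 0:
            block_tier[block] = TIERS[idx - 1]      # promote to a faster tier
        elif hits <= DEMOTE_THRESHOLD and idx < len(TIERS) - 1:
            block_tier[block] = TIERS[idx + 1]      # demote to a slower tier

# Example: block A is hot, block B is cold.
blocks = {"A": "fc", "B": "fc"}
counts = Counter({"A": 500, "B": 2})
rebalance(blocks, counts)
print(blocks)   # {'A': 'ssd', 'B': 'nl'}
```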
Some days ago a colleague and I implemented a small 3-node VMware vSphere Essentials Plus cluster with an HP 3PAR StoreServ 7200c. Costs are always a sore point in SMB environments, so it should come as no surprise that we used iSCSI in this design. I had some doubts about using iSCSI with an HP 3PAR StoreServ, mostly because of performance and complexity. IMHO, iSCSI is more complex to implement than Fibre Channel (FC).
On February 25, 2015, PernixData released the latest version of PernixData FVP. Even though it’s only a .5 release, FVP 2.5 adds some really cool features and improvements. The new features are:
Distributed Fault Tolerant Memory-Z (DFTM-Z)
Intelligent I/O profiling
Role-based access control (RBAC), and
Network acceleration for NFS datastores

Distributed Fault Tolerant Memory-Z (DFTM-Z)

FVP 2.0 introduced support for server-side memory as an acceleration resource. With this, it was possible to use server-side memory to accelerate VM I/O operations.
Within six months, a customer of mine twice had a full database partition on a VMware vCenter Server Appliance. After the first outage, the customer increased the size of the partition which is mounted to /storage/db. Some months later, just a few days ago, the vCSA became unresponsive again. Again because of a filled-up database partition. The customer increased the size of the database partition once more (to ~200 GB!), and today I had time to take a look at this nasty vCSA.
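Before simply growing the partition yet again, it is worth checking how full /storage/db really is and which directories consume the space. A minimal sketch, assuming Python 3 is available on the appliance and the default /storage/db mount point:

```python
import os
import shutil

# Default mount point of the vCSA database partition.
DB_PATH = "/storage/db"

# Overall usage of the partition.
usage = shutil.disk_usage(DB_PATH)
print("%s: %.1f GiB used of %.1f GiB" % (DB_PATH, usage.used / 2**30, usage.total / 2**30))

# Size of the files directly inside each directory below /storage/db,
# to see whether the embedded database or its log files eat the space.
sizes = {}
for root, dirs, files in os.walk(DB_PATH):
    paths = [os.path.join(root, f) for f in files]
    sizes[root] = sum(os.path.getsize(p) for p in paths if os.path.isfile(p))

for path, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print("%10.1f MiB  %s" % (size / 2.0**20, path))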
Disk space is scarce. I only have about 1 TB of SSD storage in my lab, and I don’t like to waste too much of it. My hosts use NFS to connect to my Synology NAS, and even though I use the VAAI-NAS plugin, I only use thin-provisioned disks. Thin-provisioned disks tend to grow over time: if you copy a 1 GB file into a VM and delete this file immediately, you will find that the VMDK has grown by 1 GB.
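The reason is that a guest-level delete only touches the guest filesystem’s metadata; the blocks that were written stay allocated in the VMDK. The following toy model illustrates that high-water-mark behavior; it is a pure illustration and has nothing to do with the actual VMDK format:

```python
class ThinDisk:
    """Toy model of a thin-provisioned disk: blocks are allocated on
    first write and never released by a guest-level file delete."""

    def __init__(self, block_size_mb=1):
        self.block_size_mb = block_size_mb
        self.allocated = set()          # blocks backed by real storage

    def write(self, blocks):
        self.allocated.update(blocks)   # first write allocates the block

    def guest_delete(self, blocks):
        # The guest only marks the blocks as free in its own filesystem;
        # the hypervisor never learns about it, so nothing shrinks here.
        pass

    @property
    def size_mb(self):
        return len(self.allocated) * self.block_size_mb


disk = ThinDisk()
one_gb = range(0, 1024)        # 1024 x 1 MB blocks, roughly a 1 GB file
disk.write(one_gb)
disk.guest_delete(one_gb)
print(disk.size_mb)            # still 1024 - the VMDK grew and stays grown
```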
This is not a brand-new issue, and it’s well discussed in the VMTN community. After applying the ESXi 5.5.0 U2 patches from October 15, 2014, you may notice the following symptoms:
Some Citrix NetScaler VMs with e1000 vNICs lose network connectivity
You can’t access the VM console after applying the patches

VMware released a couple of patches in October:
ESXi550-201410101-SG (esx-base)
ESXi550-201410401-BG (esx-base)
ESXi550-201410402-BG (misc-drivers)
ESXi550-201410403-BG (sata-ahci)
ESXi550-201410404-BG (xhci-xhci)
ESXi550-201410405-BG (tools-light)
ESXi550-201410406-BG (net-vmxnet3)

More specifically, it’s the patch ESXi550-201410401-BG that is causing the problem.
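If you are not sure whether an affected host already has this patch installed, the version of the installed esx-base VIB tells you. A small sketch that calls esxcli from Python on the host; the esxcli command is standard, and comparing the reported version against the patch release notes is left to the reader:

```python
import subprocess

# Show the installed esx-base VIB; its version string reflects the
# installed patch level. Compare it with the release notes of
# ESXi550-201410401-BG to see whether the problematic patch is on the host.
output = subprocess.check_output(
    ["esxcli", "software", "vib", "get", "--vibname=esx-base"],
    universal_newlines=True,
)
print(output)
```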
A customer contacted me because he had trouble moving a VM between two clusters. The hosts in the source cluster used vNetwork Standard Switches (vSS), the hosts in the destination cluster a vNetwork Distributed Switch (dVS). Because of this, a host in the destination cluster had an additional vSS with the same port groups that were used in the source cluster. This configuration allowed the customer to do vMotion without shared storage between the two clusters.
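The crucial detail of this workaround is that the port group names on the additional vSS of the destination host match the names used in the source cluster exactly. A trivial, hypothetical sketch of that comparison; the two name sets are placeholders and would normally come from the vSphere Client, PowerCLI or pyVmomi:

```python
# Port group names as they exist on the source vSS and on the
# additional vSS of the destination host (placeholder values).
source_portgroups = {"VM Network", "vMotion", "Management Network"}
destination_portgroups = {"VM Network", "Management Network"}

missing = source_portgroups - destination_portgroups
if missing:
    print("Missing on destination vSS:", ", ".join(sorted(missing)))
else:
    print("All port groups match - vMotion should not complain about the network.")
```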
The whole story began with a tweet and a picture:
Spotted Marvin on VMware campus during a break this morning "first hyperconverged infrastructure appliance " pic.twitter.com/1iIPocjREX
— Fletcher Cocquyt (@Cocquyt) June 7, 2014

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.
What is EVO:RAIL?
In April 2014, a bug was discovered in vSphere 5.5 U1 that can lead to APD (all paths down) events with NFS datastores. iSCSI, FC and FCoE aren’t affected by this bug, but potentially every NFS installation running vSphere 5.5 U1 was at risk. This bug is described in KB2076392. Luckily, none of my customers ran into this bug, but that is mostly due to the fact that most of my customers use FC, FCoE or iSCSI.
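If you want to check whether a 5.5 U1 host with NFS datastores has already been hit, APD events leave traces in the vmkernel log. A coarse sketch; the log path is the ESXi default, and the keyword match is an assumption rather than the exact messages quoted in KB2076392:

```python
# Coarse scan of the ESXi vmkernel log for All Paths Down (APD) events.
LOG = "/var/log/vmkernel.log"

with open(LOG) as log:
    apd_lines = [line.rstrip() for line in log if "APD" in line]

print("APD-related log lines found:", len(apd_lines))
for line in apd_lines[-10:]:     # show the most recent hits
    print(line)
```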