Storage

Conflicting information: Setting iops option for VMW_PSP_RR for HP 3PAR StoreServ on ESXi

Yesterday I received a tweet on this topic.

Later Craig Kilborn joined the conversation, and I decided to clarify this 100 or 1 IOPS myth the next morning.

In order to give you some context: I wrote a blog post about adding a custom SATP claim rule for HP 3PAR StoreServ storage on ESXi. In that post I pointed out that the claim rule is usually used to change the default behaviour for switching the path for active IO. For the VMW_PSP_RR this default is 1000 IOPS, which means that after 1000 IOPS for a specific device, the active IO to this device is switched to the next active and optimized path. I recommend reading this blog post from Duncan Epping for more information.
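
If you only want to change this behaviour for devices that are already claimed by VMW_PSP_RR, you can set the IOPS value per device with esxcli. A minimal sketch; the device NAA ID is a placeholder:

```
# Show the current Round Robin configuration of a device
# (replace naa.xxxxxxxx with the NAA ID of your 3PAR volume)
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxx

# Switch the path after every single IO instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxxxxxxx
```

Keep in mind that this is a per-device setting; newly presented devices get the default again, which is exactly what the custom claim rule addresses.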

Add custom SATP claimrule for HP 3PAR to VMware ESXi

One of the tasks that I finish before I present the first Virtual Volumes (VV) to hosts is to discuss the need for a custom SATP claim rule with the customer. The usual requirement for a custom claim rule is that the active and optimized path should be switched after each IO, not after 1000 IOs. Duncan Epping wrote a nice blog post about this some years ago. I recommend reading it.
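
The following is a sketch of such a claim rule, based on the values commonly documented for 3PAR Virtual Volumes (vendor “3PARdata”, model “VV”); verify them against the current HP implementation guide for your ESXi release before using them:

```
# Add a custom SATP claim rule for HP 3PAR StoreServ Virtual Volumes:
# use VMW_SATP_ALUA with Round Robin and switch the path after each IO
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" \
  -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom Rule"

# Verify that the rule was added
esxcli storage nmp satp rule list | grep -i 3PARdata
```

The rule only applies to devices claimed after it was added, so add it before you present the first Virtual Volumes to the host (or reclaim the devices afterwards).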

Some basics

The Storage Array Type Plug-In (SATP) is responsible for array-specific operations, like health monitoring of physical paths, reporting of path state changes and path failover. Each SATP is linked to a Path Selection Policy (PSP), which controls the selection of active paths for IO. VMware ESXi provides a couple of SATPs, which you can list directly on the host.
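
A quick way to see which SATPs and PSPs a host offers is esxcli:

```
# List all SATPs available on this host, including their default PSPs
esxcli storage nmp satp list

# List the available Path Selection Policies
esxcli storage nmp psp list
```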

Some thoughts about HP 3PAR Adaptive Optimization

HP 3PAR Adaptive Optimization (AO) enables autonomic storage tiering on HP 3PAR storage arrays. With this feature the HP 3PAR storage system analyzes IO and then migrates regions of 128 MB between different storage tiers. Frequently accessed regions of volumes are moved to higher tiers, less frequently accessed regions are shifted to lower tiers. I often talk with customers about AO and I know that this feature is sometimes misunderstood and misconfigured. This blog post is a summary of the topics that are, in my opinion, the most important.
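
To give a rough idea of how AO is set up on the array side, here is a hedged sketch using the 3PAR CLI. The CPG names are placeholders, and the exact options of createaocfg and startao differ between 3PAR OS releases, so treat the flags as assumptions and check the CLI reference for your release:

```
# Create an AO configuration with three tiers (CPG names are placeholders)
createaocfg -t0cpg SSD_CPG -t1cpg FC_CPG -t2cpg NL_CPG -mode Balanced AO_POLICY

# Show the existing AO configurations
showaocfg

# Start an AO run that analyzes the last 24 hours of IO statistics
# (-btsecs -86400 sets the begin of the sample period to 24 hours ago)
startao -btsecs -86400 AO_POLICY
```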

Deploying HP StoreVirtual VSA - Part I

I would like to thank Calvin Zito for the donation of StoreVirtual NFR licenses to vExperts. This will help to spread the know-how about this awesome product! If you are not a vExpert, you can download the StoreVirtual VSA for free and try it for 60 days. If you are a vExpert, ping Calvin on Twitter for a one-year NFR license.

This blog post covers the deployment of the current StoreVirtual VSA release (LeftHand OS 11). A second blog post covers the configuration using the CMC. Both posts are focused on LeftHand OS 11 and VMware vSphere. If you are searching for a deployment and configuration guide for LeftHand OS 9.x or 10 on VMware vSphere, take a look at these two blog posts from Craig Kilborn: Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1 & Part 2 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1. Another blog post that covers LeftHand OS 11 is from Hugo Strydom. Hugo wrote about what he did with his VSA (vExpert : What I did with my HP VSA). I wrote a blog post about the HP StoreVirtual VSA some weeks ago. If you are interested in some basics about the VSA, check out that post.

Deploying HP StoreVirtual VSA - Part II

Part I of this series covered the deployment; part II is dedicated to the configuration of the StoreVirtual VSA cluster. I assume that the Centralized Management Console (CMC) is already installed. Start the CMC. If you see no systems under “Available Systems”, click “Find” on the menu and then choose “Find Systems…”. A dialog will appear. Click “Add…” and enter the IP address of one of the previously deployed VSA nodes. Repeat this until all deployed VSA nodes are added, then click “Close”. Now you should have all available VSA nodes listed under “Available Systems”.

DataCore announces SANsymphony-V10

Today DataCore announced their latest SANsymphony-V release. After the merge of SANmelody & SANsymphony, SANsymphony-V10 is the 10th generation of DataCore's flagship product. Interestingly, DataCore uses the terms “software-defined” and “Virtual SAN”. Whether the product matches the definition of these terms is something everyone should decide for themselves. But this is another story.

What is DataCore SANsymphony-V?

What DataCore definitely does is automate and simplify storage management and provisioning. I really like its simplicity. DataCore SANsymphony-V delivers enterprise-class functionality, like synchronous mirroring, replication, snapshots, clones, thin provisioning and tiering. It runs on x86 hardware with Microsoft Windows Server 2008 or 2012. Multiple servers can be grouped together for load balancing and redundancy. A storage pool can be created out of internal or external flash and rotating rust. Single or mirrored virtual disks can be carved out of this storage pool. Hosts can access these virtual disks using iSCSI or Fibre Channel. Because DataCore SANsymphony-V10 can use several different technologies as backend for storage pools, it's easy to replace backend storage: you can add or remove disks to or from storage pools. If your backend storage is an old EMC CLARiiON and you get a new HP MSA 2040 Storage, you can replace the old array without disruption.

HP StoreVirtual VSA - An introduction

In 2008 HP acquired LeftHand Networks for “only” $360 million. Compared to the acquisition of 3PAR in 2010 ($2.35 billion), this was a really cheap buy. LeftHand Networks was a pioneer of IP-based storage built on commodity server hardware. Their secret was SAN/iQ, a Linux-based operating system that did the magic. HP StoreVirtual is the TAFKAP (or Prince…? What's his current name?) in the HP StorageWorks product family. ;) HP LeftHand, HP P4000 and now StoreVirtual. But the secret sauce never changed: SAN/iQ or LeftHand OS. Hardware comes and goes, but the secret of StoreVirtual was and is the operating system. And because of this it was easy for HP to bring the OS into a VM: the StoreVirtual Virtual Storage Appliance (VSA) was born. So you can choose between the StoreVirtual Storage nodes (HW appliances) and the StoreVirtual VSA, the virtual storage appliance. This article will focus on the StoreVirtual VSA with LeftHand OS 11.

Simulate ONTAP 8 - An introduction

While talking with a colleague, she told me that she would like to know more about NetApp. Unfortunately we don't have a NetApp system in our lab, and playing with customer equipment is… mmh… unfavorable. But there's a solution for this problem: Simulate ONTAP 8. This software allows you to simulate a 7-Mode or Cluster-Mode (c-Mode) system and to test many of its features. All you need is VMware Workstation, Player or Fusion, or an ESXi host.

Simulate ONTAP 8: Setup CIFS

This is a really short post. A first step can be the configuration of CIFS, which is done using the “cifs setup” command. After you have set up CIFS, you can create volumes and qtrees, share them with your Windows servers, etc. It's a good start to your Data ONTAP 8 journey.
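
As a rough sketch of the workflow on a 7-Mode simulator: “cifs setup” starts an interactive wizard, after which you can create and share a volume. The aggregate, volume and share names below are placeholders:

```
# Start the interactive CIFS setup wizard (7-Mode)
filer> cifs setup

# Create a volume, a qtree inside it, and share the qtree via CIFS
filer> vol create cifsvol aggr0 10g
filer> qtree create /vol/cifsvol/data
filer> cifs shares -add data /vol/cifsvol/data
```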

The requirements

All you need is a configured ONTAP 8 simulator instance and a Windows Domain Controller with Active Directory.

Useful stuff about Nutanix

Nutanix was founded in 2009 and left stealth mode in 2011. Their Virtual Computing Platform combines storage and computing resources in a building block scheme. Each appliance consists of up to four nodes and local storage (SSD and rotating rust). At least three nodes are necessary to form a cluster. If you need more storage or compute resources, you can add more appliances, and thus nodes, to the cluster (scale out). Nutanix scales proportionately with cluster growth. The magic is not the hardware - it's the software. The local storage resources of each appliance are passed to the Nutanix Controller VM (CVM). The CVM, which runs on each node regardless of the hypervisor, serves storage and I/O to the VMs. You can run VMware ESXi, Microsoft Hyper-V and KVM on the nodes. Although the Nutanix Distributed File System (NDFS) is stretched across all nodes, I/O for a VM is served by the local CVM. The storage can be presented to the hypervisor via iSCSI, NFS or SMB3.