Posts

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part II

The first part of this (short) blog series covered the basics of VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. This second part will cover the basic tasks to configure Peer Persistence. Please note that this blog post relies on the features and supported configurations of 3PAR OS 3.1.3! This is essential to know, because 3.1.3 introduced some important enhancements to 3PAR Remote Copy.

Fibre-Channel zoning

One of the very first tasks is to create zones between the Remote Copy Fibre Channel (RCFC) ports. I used two ports from a quad-port FC adapter for Remote Copy. This matrix shows the zone members in each Fibre Channel fabric. 3PAR OS 3.1.3 supports up to four RCFC ports per node; earlier versions of 3PAR OS only support one RCFC port per node.
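To illustrate the idea, this is how the zone for a single RCFC port pair could be created on a Brocade-based fabric (a minimal sketch; the alias names and WWPNs are placeholders, and the second fabric would get an equivalent configuration):

```
# Fabric A: aliases for the RCFC ports of both 3PAR systems (WWPNs are examples)
alicreate "3par01_rcfc_0_3_1", "20:31:00:02:ac:00:aa:aa"
alicreate "3par02_rcfc_0_3_1", "20:31:00:02:ac:00:bb:bb"

# One zone per RCFC port pair
zonecreate "rcfc_3par01_3par02", "3par01_rcfc_0_3_1; 3par02_rcfc_0_3_1"

# Add the zone to the fabric configuration and activate it
cfgadd "fabric_a_cfg", "rcfc_3par01_3par02"
cfgenable "fabric_a_cfg"
```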

Trouble due to changed vDS default security policy

A customer contacted me because he had trouble moving a VM between two clusters. The hosts in the source cluster used vNetwork Standard Switches (vSS), the hosts in the destination cluster a vNetwork Distributed Switch (vDS). Because of this, a host in the destination cluster had an additional vSS with the same port groups that were used in the source cluster. This configuration allowed the customer to do vMotion without shared storage between the two clusters. The setup worked fine until the customer moved a specific VM to the new cluster and switched the port group of the VM from the vSS to the vDS: the VM lost its network connection. A switch back to the vSS restored network connectivity for the VM. While troubleshooting this issue, I noticed that the port was blocked due to an L2 security violation.
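If you run into the same issue, the effective security policy of the distributed port group can be checked, and relaxed if necessary, with PowerCLI (a sketch; the port group name is a placeholder, and whether MAC address changes and forged transmits should really be allowed depends on the VM):

```
# Check the current security policy of the distributed port group (name is a placeholder)
Get-VDPortgroup -Name "VM-Network" | Get-VDSecurityPolicy

# Allow MAC address changes and forged transmits if the VM legitimately needs them
Get-VDPortgroup -Name "VM-Network" | Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -MacChanges $true -ForgedTransmits $true
```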

HP StoreOnce Enterprise Manager v1.3 installation fails on non-English OS

Sometimes the easy jobs seem to be the hardest. Especially if you have to deal with high-quality software… As part of a project I had to install and configure an HP StoreOnce 4500 appliance in combination with HP Data Protector 8.12 and a StoreEver MSL2024 G3 tape library. No big deal - until I had to install HP StoreOnce Enterprise Manager v1.3 (SEM) on the new backup server. The installation failed with this error:

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence - Part I

The title of this blog post mentions two terms that have to be explained. First, a VMware vSphere Metro Storage Cluster (or VMware vMSC) is a configuration of a VMware vSphere cluster that is based on a stretched storage cluster. Second, HP 3PAR Peer Persistence adds functionality to HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuously available storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

Data Protector: Exchange 2010 database recovery from copy session fails

The recovery of an Exchange mailbox using a recovery database is usually no big deal: simply restore the database, create a recovery database and recover the mailbox or items from the mailbox. Sometimes you are lucky and the customer has licensed the Data Protector Granular Recovery Extension (GRE) for Exchange 2010. This was unfortunately not true in my case. Okay, so let's do it the old way. The needed tape was available in the library and luckily it was a full backup. So I quickly added a disk to the VM and started the recovery of the database to a temporary location. At this point, the disaster took its course…
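For context, the classic recovery-database approach on Exchange 2010 looks roughly like this (a sketch with placeholder names and paths; it assumes the restored database has been brought into a clean shutdown state first):

```
# Create a recovery database pointing to the restored EDB file
New-MailboxDatabase -Recovery -Name "RDB01" -Server "EX01" `
    -EdbFilePath "E:\Recovery\DB01.edb" -LogFolderPath "E:\Recovery\Logs"

# Mount the recovery database
Mount-Database "RDB01"

# Restore the content into the original mailbox (Exchange 2010 SP1 and later)
New-MailboxRestoreRequest -SourceDatabase "RDB01" `
    -SourceStoreMailbox "John Doe" -TargetMailbox "john.doe"
```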

VMware jumps on the fast moving hyper-converged train

The whole story began with a tweet and a picture:

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, "Marvin", or Project Mystic, got its final name: EVO:RAIL.

What is EVO:RAIL?

Firstly, we have to learn a new acronym: Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL will be exactly this: an HCIA. IMHO, EVO:RAIL is VMware's attempt to jump on the fast-moving hyper-converged train. EVO:RAIL combines different VMware products (vSphere Enterprise Plus, vCenter Server, Virtual SAN and vCenter Log Insight) along with EVO:RAIL deployment, configuration and management into a hyper-converged infrastructure appliance. Appliance? Yes, an appliance: a single stock keeping unit (SKU) including hardware, software and support. To be honest: VMware will not try to sell hardware. The hardware will be provided by partners (currently Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro).

New HP 3PAR StoreServ AFA, VMware VVols and some thoughts

At HP Discover in June 2013, HP announced the HP 3PAR StoreServ 7450 All-Flash Array. To optimize the StoreServ platform for all-flash workloads, HP made some changes to the hardware of the nodes: the 7450 uses 8-core Intel Xeon CPUs instead of 6-core 1.8 GHz CPUs, and the cache was doubled from 64 GB to 128 GB. HP also made some changes to the 3PAR OS: it added additional cache flush queues to separate the flushing of cache for rotating rust and SSD devices, made some write I/O optimizations and added the ability to perform fragmented writes. Instead of writing full 16 KB blocks, 3PAR OS is now able to write only 4 KB of a 16 KB block. These software-based changes may also be used on the 7200 and 7400. This leads to the new…

DataCore In SANsymphony-V 10: Potential for data corruption

This is only a short blog post. I just got an e-mail from DataCore support: they found a critical bug in SANsymphony-V 10.0.0.0, which should be fixed with Update 1. Only VMware customers are affected, because the bug is related to VMware Thin Provisioning Thresholds. Update 1 is planned for early September 2014. If you're running SANsymphony-V 10.0.0.0, open an incident with DataCore support to get the available hotfix. If you plan to update to SANsymphony-V 10, delay the update until the release of SANsymphony-V 10 Update 1.

Creating an HP IRF stack with HP 5820-24XG-SFP+ Switches

The development of the Intelligent Resilient Framework (IRF) goes back to H3C, a joint venture between Huawei and 3Com. With the acquisition of 3Com by HP, IRF-capable products were integrated into the HP Networking product portfolio.

What is IRF?

IRF is a software-based solution to connect multiple switches together and create a logical switching device. The idea behind IRF is to create a logical device with one control plane and multiple data planes. This simplifies the management and sometimes eliminates the need for techniques like (R/M)STP, XRRP/VRRP/HSRP or similar to create layer 2 or layer 3 redundancy for cases like a switch failure; this depends on the requirements of the network design. The master switch in an IRF stack updates the forwarding and routing tables for all devices in the stack. If it fails, another switch in the stack is elected as master.

The switches are connected with multiple high-speed links (10 GbE in most cases, some entry-level switches allow 1 GbE) and use a daisy-chain or ring topology. If a switch fails, even if it's the master of the stack, the stack continues to operate. The time for a failover is < 50 ms (Source). There is another advantage: because the stack acts like a single switch, you can use switch-assisted teaming or trunking between IRF stacks or between servers and IRF stacks.
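To give you an idea what an IRF configuration looks like, this is a rough sketch for the first member of a two-switch stack on Comware 5 (member and port numbers are examples; the second switch would be renumbered to member 2, rebooted and configured with a matching irf-port 2/2):

```
# Shut down the physical 10 GbE link before binding it to an IRF port
interface Ten-GigabitEthernet 1/0/25
 shutdown

# Bind the link to IRF port 1/1
irf-port 1/1
 port group interface Ten-GigabitEthernet 1/0/25

# Bring the link back up, save and activate the IRF port configuration
interface Ten-GigabitEthernet 1/0/25
 undo shutdown
save
irf-port-configuration active
```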

Juniper SRX: Using CoS to manage bandwidth

Sometimes it’s necessary to limit specific traffic in terms of bandwidth. Today I’d like to show you how to manage bandwidth limits using CoS and firewall policies. Especially if you only have limited bandwidth, e.g. a DSL connection, it can be useful to manage the bandwidth used by specific hosts or protocols. I use a really simple setup to show you how you can manage bandwidth using CoS on a Juniper SRX.
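As a foretaste, a simple policer applied through a firewall filter could look like this on an SRX (a minimal sketch; host address, interface and bandwidth values are examples):

```
# Policer that limits matching traffic to 1 Mbit/s
set firewall policer LIMIT-1M if-exceeding bandwidth-limit 1m burst-size-limit 15k
set firewall policer LIMIT-1M then discard

# Filter that applies the policer to traffic from a specific host and accepts the rest
set firewall family inet filter BW-LIMIT term LIMIT-HOST from source-address 192.168.1.10/32
set firewall family inet filter BW-LIMIT term LIMIT-HOST then policer LIMIT-1M
set firewall family inet filter BW-LIMIT term DEFAULT then accept

# Attach the filter to the LAN-facing interface
set interfaces ge-0/0/1 unit 0 family inet filter input BW-LIMIT
```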