Virtualization

Top vBlog 2015 Contest has started

If you are a frequent reader of virtualization blogs, then you may have heard about the vLaunchPad. It lists hundreds of VMware & virtualization blogs, as well as links to resources and other material. The vLaunchPad is managed by Eric Siebert (@ericsiebert, vsphere-land.com), who organizes the annual Top vBlog voting contest. This year the Top vBlog contest is sponsored by Infinio.

In the 2014 voting my “old” blog ranked 292nd of 320. I should mention that blazilla.de had only German-language content. In a community where English is the predominant language, this result is not surprising. If you are interested in last year’s results, you can find them here. In 2014 I started vcloudnine.de, but I didn’t nominate it for the 2014 voting. Instead, I nominated blazilla.de for the Top vBlog 2014 contest. This year the tables have turned and I have nominated vcloudnine.de for the following categories:

The beginning of a deep friendship: Me & PernixData FVP 2.0

I’m a bit late, but better late than never. A few days ago I installed PernixData FVP 2.0 in my lab and I’m impressed! Until this installation, solutions such as PernixData FVP or VMware vSphere Flash Read Cache (vFRC) weren’t interesting for me or most of my customers. Some of my customers played around with vFRC, but most of them decided to add flash devices to their primary storage system and use techniques like tiering or flash cache. SMB customers in particular had no chance to use flash or RAM to accelerate their workloads because of tight budgets. With decreasing costs for flash storage, solutions like PernixData FVP and vFRC are becoming more interesting for my customers. Another reason was my lab: I simply didn’t have the equipment to play around with that fancy stuff. But things have changed and now I’m ready to give it a try.

Juniper publishes vMX

This tweet from @JuniperNetworks really inspired me yesterday. I have liked Juniper’s Firefly Perimeter (vSRX) from day one. I like the idea behind this product (yes, I like everything that can run as a VM…). But yesterday Juniper went one better.

Yesterday Juniper Networks announced a virtualized, carrier-grade version of their MX Series 3D router. The Juniper Networks vMX is a virtual MX Series 3D Universal Edge Router, optimized to run on x86 hardware. The vMX can run on all major hypervisors, including VMware ESXi and KVM. It was also mentioned that vMX can run in Docker containers or on bare metal.

VMware disables inter VM Transparent Page Sharing (TPS) for security reasons

This morning I discovered a tweet from Derek Seaman in my timeline that caught my attention.

TPS stands for Transparent Page Sharing and it’s one of VMware’s memory management techniques. VMware ESX(i) uses four different techniques to manage host and guest memory resources: TPS, ballooning, memory compression and hypervisor swapping (check VMware KB2017642 for more information). The cost increases from TPS to swapping: TPS runs continuously, while swapping is the last resort under heavy memory pressure.
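
As a quick way to check where your hosts stand, here is a minimal pyVmomi sketch that reads the Mem.ShareForceSalting advanced option (the setting VMware uses to restrict page sharing) on every host. The vCenter name and credentials are hypothetical, and the option is only present on ESXi builds that already ship the TPS patch:

```python
# A minimal sketch, assuming pyVmomi is installed and the ESXi builds
# already expose the Mem.ShareForceSalting advanced option.
# The vCenter name and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    # QueryOptions returns a list of OptionValue objects;
    # 0 = inter-VM page sharing enabled, 2 = sharing restricted per VM
    opt = host.configManager.advancedOption.QueryOptions('Mem.ShareForceSalting')
    print(host.name, opt[0].value)

view.Destroy()
Disconnect(si)
```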

My lab network design

Inspired by Chris Wahl’s blog post “Building a New Network Design for the Lab”, I want to describe what my lab network design looks like.

The requirements

My lab is separated from my home network and focused on the needs of a lab. A detailed overview of my lab can be found here. I divided it into two parts: an infrastructure part and a lab part. The infrastructure part consists of the devices that are needed to provide basic infrastructure and management. The other part is my playground.

VMware jumps on the fast moving hyper-converged train

The whole story began with a tweet and a picture:

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.

What is EVO:RAIL?

First, we have to learn a new acronym: Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL will be exactly that: an HCIA. IMHO, EVO:RAIL is VMware’s attempt to jump on the fast-moving hyper-converged train. EVO:RAIL combines several VMware products (vSphere Enterprise Plus, vCenter Server, Virtual SAN and vCenter Log Insight) with EVO:RAIL deployment, configuration and management into a hyper-converged infrastructure appliance. Appliance? Yes, an appliance: a single stock keeping unit (SKU) including hardware, software and support. To be honest: VMware will not try to sell the hardware itself. The hardware will be provided by partners (currently Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro).

Memory management: VMware ESXi vs. Microsoft Hyper-V

Virtualization is an awesome technology. In the last few weeks I visited a customer and we took a walk through their data centers. While standing in one of them, I thought: imagine if all the servers that currently run as VMs were physical! I’m still impressed by the impact of virtualization. The idea is so simple: you share the resources of the physical hardware (I/O, network bandwidth, CPU cycles and memory) between multiple virtual instances. After nearly 10 years of experience with server virtualization, I can tell that memory in particular is one of the weak points. When a customer experiences performance problems, they are mostly caused by a lack of storage I/O or memory.
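
Ballooned or swapped guest memory is usually the first measurable symptom of such a memory bottleneck. Here is a minimal pyVmomi sketch that flags affected VMs via their quick stats; the connection details are hypothetical:

```python
# A minimal sketch, assuming pyVmomi; connection details are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    qs = vm.summary.quickStats  # values are reported in MB
    if qs.balloonedMemory or qs.swappedMemory:
        print('{}: ballooned={} MB, swapped={} MB'.format(
            vm.name, qs.balloonedMemory, qs.swappedMemory))

view.Destroy()
Disconnect(si)
```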

Deploying HP StoreVirtual VSA - Part I

I would like to thank Calvin Zito for donating StoreVirtual NFR licenses to vExperts. This will help spread the know-how about this awesome product! If you are not a vExpert, you can download the StoreVirtual VSA for free and try it for 60 days. If you are a vExpert, ping Calvin on Twitter for a one-year NFR license.

This blog post covers the deployment of the current StoreVirtual VSA release (LeftHand OS 11). A second blog post covers the configuration using the CMC. Both posts focus on LeftHand OS 11 and VMware vSphere. If you are searching for a deployment and configuration guide for LeftHand OS 9.x or 10 on VMware vSphere, take a look at Part 1 and Part 2 of Craig Kilborn’s “How To Install & Configure HP StoreVirtual VSA On vSphere 5.1”. Another blog post that covers LeftHand OS 11 is from Hugo Strydom, who wrote about what he did with his VSA (vExpert: What I did with my HP VSA). I also wrote a blog post about the HP StoreVirtual VSA some weeks ago; if you are interested in some basics about the VSA, check that post.

Deploying HP StoreVirtual VSA – Part II

Part I of this series covered the deployment; Part II is dedicated to the configuration of the StoreVirtual VSA cluster. I assume that the Centralized Management Console (CMC) is already installed. Start the CMC. If you see no systems under “Available Systems”, click “Find” in the menu and then choose “Find Systems…”. A dialog will appear. Click “Add…” and enter the IP address of one of the previously deployed VSA nodes. Repeat this until all deployed VSA nodes have been added, then click “Close”. Now all available VSA nodes should be listed under “Available Systems”.
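
As an aside: besides the GUI, the CMC installation also ships the CLIQ command-line interface, so the discovery step can be scripted. The sketch below is only an illustration; the node IPs and credentials are made up, and the exact command and parameter names should be verified against the CLIQ reference guide for LeftHand OS 11:

```python
# A rough sketch, assuming the cliq binary that ships with the CMC is in
# the PATH and supports the getSystemInfo command with key=value
# parameters. Node IPs and credentials are hypothetical.
import subprocess

NODES = ['192.168.20.11', '192.168.20.12']  # hypothetical VSA node IPs

for ip in NODES:
    # Query each storage node directly, roughly what the CMC
    # "Find Systems..." dialog does for every address you add
    subprocess.run(
        ['cliq', 'getSystemInfo',
         'login={}'.format(ip), 'userName=admin', 'passWord=secret'],
        check=True)
```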