Category Archives: vSphere 5

vSphere Hardening (Updated with General Availability)


One of the stronger cases in the VMware vs Hyper-V argument is the 'footprint' of the hypervisor: VMware wins on size and claims a significant advantage from its small, purpose-built hypervisor with a reduced attack surface. However, admins often take this as an excuse to leave a default configuration on hosts and on the vSphere components that make up a virtualisation solution. VMware has now made things a little easier for those concerned that a vulnerability in even a single host can mean chaos for the virtual environment, releasing an updated hardening guide covering the following components:

  • Virtual Machines
  • ESXi hosts
  • Virtual Network
  • vCenter Server, plus its database and clients
  • vCenter Web Client
  • vCenter SSO Server
  • vCenter Server Appliance (VCSA)-specific guidance
  • vCenter Update Manager

To directly download the guide you can use this link and this one for the change log.

VMware Blog Details Here

Thanks to Mike Foley for confirming the version status of the published document.


VMworld 2013 Technical Keynote Summary (from vmguru.nl)

Original post here

Today's General Session is all about the technology behind the solutions provided by VMware and its partner ecosystem. Featuring Carl Eschenbach, VMware's President and COO, along with VMware's engineering leaders, it showcases state-of-the-art technology that is transforming IT by virtualizing and automating compute, storage, networking and security across private and hybrid clouds.

It has been the largest IT event in Europe for a few years now, growing from 1,400 attendees to more than 8,500. Let's summarise yesterday's announcements and releases.

 


VMware's mission is to virtualise the whole x86 datacenter. Carl says: "We are not there yet, but we are on the road towards it."

 

Software will reduce the friction between business and IT, enabling increased business value and greater velocity: more budget spent on innovation, less on maintenance. This requires an agile datacenter and automating IT for IT. The world will be a hybrid cloud in the near future. How? Through the Software-Defined Data Center, where all infrastructure is virtualised and delivered as a service.

This is achieved through three imperatives:

1. Virtualize all of IT

2. IT management gives way to automation

3. Use a compatible hybrid cloud

Carl introduces Kit Colbert, Principal Engineer, on stage. Kit shows several new or improved products that help achieve the three imperatives and speed up the delivery of services to the business. IT requirements are complex, and on top of that we need to keep the lights on, so there is a lot of pressure on IT teams.

Kit shows VMware vCloud Automation Center deploying a project (Vulcan) through a Service Catalog on a private cloud infrastructure. After deploying the project in the private cloud, he shows how the hybrid cloud option deploys it the same way in the hybrid cloud, taking all the guesswork out, though bear in mind that running a workload in a public cloud isn't always cheaper! Whether deploying to the private or hybrid cloud, you can now use auto-scaling: enabling the Auto Scale feature lets you choose the maximum number of nodes. Control IT through policy to keep it simple for the business.

Governance equals control. Automation brings self-service and governance together.

Kit shows us IT service in action: self-service, transparent pricing, governance and automation. In vCAC we get a great cost overview: total cloud cost, operational analysis (price per VM) and demand analysis, even a detailed breakdown of cost by department and underlying projects.

How much does a VM cost over time? Now we can answer that and give the business a lot of detail, or even a live dashboard, showing transparent pricing not only for private clouds but also for connected public clouds, plus demand analysis. Kit is demoing the IT Business Management Suite (ITBM) product integrated into vCAC. We see lots of products nicely integrated and working together!

Next up is vCloud Application Director for application provisioning, demonstrated using the VMware vCloud Application Director v6.0 beta, which gives a detailed visual overview of the multi-tier application blueprint. Application provisioning delivers: streamlined app provisioning, decoupling of app and infrastructure, and automation that reduces steps, risk and faults.

Conventional networking is a barrier to quick app delivery. Kit explains networking with NSX: decoupling the logical network from the underlying hardware gives a lot more agility. In the demo, Kit shows that all networking actions in the previous vCAC demo were executed automatically by NSX. Traditional network routing between VMs on the same host introduces hair-pinning of network traffic; NSX routes on the vSphere hosts themselves, so applications can perform faster because hair-pinning is eliminated on the local host.

How does NSX handle security? It also enforces it at the local host level to prevent unnecessary network traffic, and load balancing is handled the same way as routing and security. The result is faster networking, better performance and happy users. NSX can be implemented in existing vSphere infrastructures with traditional vSwitches without downtime! You have multiple options to move VMs to an NSX network layer: you can power them off and bring them up again (with downtime), or use vMotion to move them through the NSX bridge onto the new NSX-based network. Physical servers can also be connected through a physical NSX bridge.

Making Storage Dynamic

With just a few clicks you can create a vSAN, making it super simple to use. In combination with NSX there will be a very fast network combining all local storage, such as SSD and flash, for the virtual environment, based on a highly resilient architecture. Kit demos how to make storage dynamic with Software-Defined Storage (VMware vSAN). vSAN clusters pool local SSDs and disks while preserving vSphere's advanced features; adding a host to a vSAN cluster adds compute, networking and storage capacity, so the building blocks now contain storage. vSAN is highly resilient because it writes multiple copies of the data.

vSAN delivers: storage policy set at time of provisioning, storage that scales, and the ability to leverage existing DAS storage.

It is time for End User Computing: end-user freedom with IT control, where VDI now goes mainstream! There is no reason NOT to virtualize desktops; every application, even graphics-intensive workloads, can run on the platform. Horizon Workspace gives users SSO to all applications, and even lets them run applications from an unsupported OS, like OneNote or Visio on a Mac. Provisioning a virtual desktop through vCAC is now possible, even to the cloud through Desktone, which VMware acquired earlier.

Horizon Suite delivers: self-service access to apps and data on any device; policy, automation, security and compliance; and enhanced productivity for users and IT.

These were great demos. Thanks, Kit, for showing us all the possibilities.

Carl introduces Joe Baguley (VMware CTO EMEA) on stage next.

The conventional approach to management is rigid, complex and fragile. That is the reality, because the problem lies in the system of silos for networking, storage and compute. We have to start accelerating by moving to a policy-based approach. This is the VMware approach to management.

The new approach needs policy-based automation to deliver agile, scalable and simple IT infrastructures.

Carl gives Joe a couple of challenges before he can start the demo. Carl says: "Seamlessly extend to the public cloud. Bring workloads back into the private cloud. No change to IT policies whatsoever."

Joe shows us a vCAC demo, telling Carl not to pick up the phone yet, even though the Vulcan project is red at the moment, but to read the words below the pretty picture ;-) It says: automated remediation action. The system is going to fix itself.

Integration of Operations Manager and vCloud Automation Center creates a self-healing infrastructure.

Automated operations deliver: policy-driven, automated, proactive response; intelligent analytics; and visibility into application health.

vCOps has connectors for the other components in the infrastructure, giving a full overview from a single pane of glass. Log Insight intelligently filters and categorises large quantities of log data, reducing 65 million events to 15.

The same templates used for managing VMs on premises are synchronized and available in vCloud Hybrid Service, and vice versa.

It is a fundamentally different way of working than in the past. IT is changing; welcome to the next era!

VMware's commitment is:

  1. No rip and replace in the datacenter.
  2. Get more out of the datacenter.
  3. Build on existing skills.

So let's tear down the world of silo-ed IT and create one IT team!

This ends the keynote. Thanks all and have a great VMworld.

Paper: VMware Horizon View Large-Scale Reference Architecture

From virtualization.info (it’s a good blog that you should read!)

VMware has released a paper titled "VMware Horizon View Large-Scale Reference Architecture". The 30-page paper details a reference architecture based on real-world test scenarios, user workloads and infrastructure system configurations. The RA uses the VCE Vblock Specialized System for Extreme Applications, composed of Cisco UCS server blades and EMC XtremIO flash storage, to support a 7,000-user VMware Horizon View 5.2 deployment. Benchmarking was done using the Login VSI Max benchmarking suite.


The paper covers the following topics:

  • Executive Summary
  • Overview
    • VCE Vblock Specialized System for Extreme Applications
    • VMware Horizon View
    • Storage Components
    • Compute and Networking Components
    • Workload Generation and Measurement
  • Test Results
    • Login VSI
    • Timing Tests
    • Storage Capacity
  • System Configurations
    • vSphere Cluster Configurations
    • Networking Configurations
    • Storage Configurations
    • vSphere Configurations
    • Infrastructure and Management Servers
    • Horizon View Configuration
    • EMC XtremIO Storage Configurations

Conclusion:

Our test results demonstrate that it is possible to deliver an Ultrabook-quality user experience at scale for every desktop, with headroom for any desktop to burst to thousands of IOPS as required to drive user productivity, thanks to the EMC XtremIO storage platform, which provides considerably higher levels of application performance and lower virtual desktop costs than alternative platforms. The high performance and simplicity of the EMC XtremIO array and the value-added systems integration work provided by VCE as part of the Vblock design contributed significantly to the overall success of the project.

vSphere Multi-Core vCPU Clarifications

One of the most common misconfigurations I see in VMware environments is the use of multiple cores per socket. VMware has released a clarification post reminding people of the best-practice advice (see below) and clarifying the performance behaviour of multi-core vCPUs.

This complements a better post by the SANMAN (who provided the graphics used below)

#1 When creating a virtual machine, by default, vSphere will create as many virtual sockets as you’ve requested vCPUs and the cores per socket is equal to one. I think of this configuration as “wide” and “flat.” This will enable vNUMA to select and present the best virtual NUMA topology to the guest operating system, which will be optimal on the underlying physical topology:

#2 When you must change the cores per socket though, commonly due to licensing constraints, ensure you mirror the physical server's NUMA topology. This is because when a virtual machine is no longer configured by default as "wide" and "flat," vNUMA will not automatically pick the best NUMA configuration based on the physical server, but will instead honor your configuration – right or wrong – potentially leading to a topology mismatch that does affect performance:

 

Full Content of the VMware Post is below:

Does corespersocket Affect Performance?

There is a lot of outdated information regarding the use of a vSphere feature that changes the presentation of logical processors for a virtual machine, into a specific socket and core configuration. This advanced setting is commonly known as corespersocket.

It was originally intended to address licensing issues where some operating systems had limitations on the number of sockets that could be used, but did not limit core count.

KB Reference: http://kb.vmware.com/kb/1010184

It’s often been said that this change of processor presentation does not affect performance, but it may impact performance by influencing the sizing and presentation of virtual NUMA to the guest operating system.

Reference: Performance Best Practices for VMware vSphere 5.5 (page 44): http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

Recommended Practices

#1 When creating a virtual machine, by default, vSphere will create as many virtual sockets as you’ve requested vCPUs and the cores per socket is equal to one. I think of this configuration as “wide” and “flat.” This will enable vNUMA to select and present the best virtual NUMA topology to the guest operating system, which will be optimal on the underlying physical topology.

#2 When you must change the cores per socket though, commonly due to licensing constraints, ensure you mirror the physical server's NUMA topology. This is because when a virtual machine is no longer configured by default as "wide" and "flat," vNUMA will not automatically pick the best NUMA configuration based on the physical server, but will instead honor your configuration – right or wrong – potentially leading to a topology mismatch that does affect performance.

To demonstrate this, the following experiment was performed. Special thanks to Seongbeom for this test and the results.

Test Bed

Dell R815 AMD Opteron 6174 based server with 4x physical sockets by 12x cores per processor = 48x logical processors.


The AMD Opteron 6174 (aka Magny-Cours) processor is essentially two 6 core Istanbul processors assembled into a single socket. This architecture means that each physical socket is actually two NUMA nodes. So this server actually has 8x NUMA nodes and not four, as some may incorrectly assume.

Within esxtop, we can validate the total number of physical NUMA nodes that vSphere detects.


Test VM Configuration #1 – 24 sockets by 1 core per socket (“Wide” and “Flat”)


Since this virtual machine requires 24 logical processors, vNUMA automatically creates the smallest topology to support this requirement being 24 cores, which means 2 physical sockets, and therefore a total of 4 physical NUMA nodes.
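The arithmetic behind that statement is worth internalising. A quick Python sketch using the Opteron 6174 numbers from the test bed (12 cores per physical socket, two 6-core NUMA nodes per socket); the values are taken from the description above, nothing else is assumed:

import math

CORES_PER_SOCKET = 12          # Opteron 6174 package
NODES_PER_SOCKET = 2           # Magny-Cours: two 6-core dies per package
CORES_PER_NODE = CORES_PER_SOCKET // NODES_PER_SOCKET   # 6

vcpus = 24                     # "wide and flat": 24 sockets x 1 core
sockets_spanned = math.ceil(vcpus / CORES_PER_SOCKET)   # 2 physical sockets
nodes_spanned = math.ceil(vcpus / CORES_PER_NODE)       # 4 physical NUMA nodes
print(sockets_spanned, nodes_spanned)                   # prints: 2 4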

Within the Linux-based virtual machine used for our testing, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware


Next, we ran an in-house micro-benchmark, which exercises processors and memory. For this configuration we see a total execution time of 45 seconds.


Next let’s alter the virtual sockets and cores per socket of this virtual machine to generate another result for comparison.

Test VM Configuration #2 – 2 sockets by 12 cores per socket


In this configuration, while the virtual machine is still configured to have a total of 24 logical processors, we manually intervened and configured 2 virtual sockets by 12 cores per socket. vNUMA will no longer automatically create the topology it thinks is best, but instead will respect this specific configuration and present only two virtual NUMA nodes as defined by our virtual socket count.

Within the Linux-based virtual machine, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware
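If you prefer to check this programmatically rather than reading the numactl output by hand, the node count can be pulled from the first line of numactl --hardware ("available: N nodes"). A small sketch, assuming numactl is installed in the guest:

import re
import subprocess

def guest_numa_nodes():
    # Parse "available: N nodes (...)" from the numactl --hardware output.
    out = subprocess.run(["numactl", "--hardware"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"available:\s*(\d+)\s+nodes", out)
    return int(match.group(1)) if match else None

print("vNUMA nodes presented to this guest:", guest_numa_nodes())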


Re-running the exact same micro-benchmark we get an execution time of 54 seconds.


This configuration, which resulted in a non-optimal virtual NUMA topology, incurred a 17% increase in execution time.

Test VM Configuration #3 – 1 socket by 24 cores per socket


In this configuration, while the virtual machine is again still configured to have a total of 24 logical processors, we manually intervened and configured 1 virtual socket by 24 cores per socket. Again, vNUMA will no longer automatically create the topology it thinks is best, but instead will respect this specific configuration and present only one NUMA node as defined by our virtual socket count.

Within the Linux-based virtual machine, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware


Re-running the micro-benchmark one more time we get an execution time of 65 seconds.


This configuration, with yet a different non-optimal virtual NUMA topology, incurred a 31% increase in execution time.

To summarize, this test demonstrates that changing the corespersocket configuration of a virtual machine does indeed have an impact on performance in the case when the manually configured virtual NUMA topology does not optimally match the physical NUMA topology.

The Takeaway

Always spend a few minutes to understand your physical server's NUMA topology and leverage that knowledge when right-sizing your virtual machines.
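A quick way to act on that advice is to audit existing virtual machines for a non-default cores-per-socket value over the vSphere API. Below is a minimal pyVmomi sketch; the vCenter hostname and credentials are placeholders, and it only reports the configuration, it changes nothing:

# Minimal pyVmomi sketch: report VMs whose cores-per-socket differs from the
# wide-and-flat default of 1. Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        hw = vm.config.hardware
        if hw.numCoresPerSocket and hw.numCoresPerSocket > 1:
            sockets = hw.numCPU // hw.numCoresPerSocket
            print("%s: %d vCPUs presented as %d sockets x %d cores"
                  % (vm.name, hw.numCPU, sockets, hw.numCoresPerSocket))
finally:
    Disconnect(si)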

Other Great References:

The CPU Scheduler in VMware vSphere 5.1

Check out VSCS4811 Extreme Performance Series: Monster Virtual Machines in VMworld Barcelona

vSphere Blog Details Replication Improvements in 5.5

From this vSphere Blog entry by Ken Warneburg:

The changes they’ve made fall into three main areas:

  1. Improved buffering algorithms at the source hosts resulting in better read performance with less load on the host and better network transfer performance
  2. More efficient TCP algorithms at the source site resulting in better latency handling
  3. More efficient buffering algorithms at the target site resulting in better write performance with less load on the host

Let’s look at each of these in a bit more detail:

Source improvements

The way blocks are queued up for transfer has changed slightly from the past iterations where TransferDiskMaxBufferCount & TransferDiskMaxExtentCount were the primary throttling mechanisms for reading and sending changed blocks.

Now, we use a global heap on each host to hold the blocks that have been identified to be sent.  We create a heap of appropriate size to hold blocks for potentially the maximum number of VMs on a host.  This is dynamically sized according to the host memory size and the maximum number of VMs on the host, but roughly it sizes to a bit more than 3MB of memory.  We then load that heap with changed blocks identified by the agent that tracks the blocks as they change.

The way the blocks are read into the heap is in essence by a round robin among the replicated disks, grabbing up to 16 of the changed blocks at a time, and there is a maximum number of IOPS set per host to 1024 to ensure this cyclical reading doesn’t overburden anything.

Blocks are then shipped from the heap to the VR Server on the remote site, with a maximum of 64 extents still “in-flight” that have not been acknowledged as written.  As those blocks come back acknowledged, the agent is free to send more from the heap.

The net result is that this is a much more efficient mechanism as we can load and send from a global heap rather than treating each VM as its own object.  Fundamentally this leads to a greater overall efficiency of the VR resource manager, and allows getting data to the VR Server faster.
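To make that flow concrete, here is a toy Python model of the behaviour described above: changed blocks are pulled round-robin from each replicated disk (up to 16 per disk per pass) into a shared heap, and shipped with no more than 64 unacknowledged extents in flight. It is purely an illustration of the described scheduling, not VMware's implementation, and it ignores the ~3MB heap sizing and the 1024-IOPS read cap:

from collections import deque

BATCH_PER_DISK = 16    # changed blocks grabbed from one disk per round-robin pass
MAX_IN_FLIGHT = 64     # extents allowed on the wire awaiting acknowledgement

def ship_changed_blocks(changed):
    """changed: dict of disk name -> list of changed block ids."""
    pending = {disk: deque(blocks) for disk, blocks in changed.items()}
    heap, in_flight, acknowledged = deque(), deque(), []

    while any(pending.values()) or heap or in_flight:
        # Read phase: round-robin across replicated disks into the shared heap.
        for disk, queue in pending.items():
            for _ in range(min(BATCH_PER_DISK, len(queue))):
                heap.append((disk, queue.popleft()))

        # Send phase: top up the in-flight window from the heap.
        while heap and len(in_flight) < MAX_IN_FLIGHT:
            in_flight.append(heap.popleft())

        # Pretend the VR Server acknowledges the current window, freeing it
        # so the next pass can send more.
        while in_flight:
            acknowledged.append(in_flight.popleft())

    return acknowledged

print(len(ship_changed_blocks({"disk1": list(range(200)), "disk2": list(range(50))})))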

TCP Changes

The TCP algorithms at the source site have been changed to use a CUBIC[3]-based transport. This is a fairly minor change, but has a very good impact on long fat networks, as we often see on the higher-latency yet still high-bandwidth connections that people often use for replication. It uses much smarter means of determining factors like TCP window size, based on probing that accelerates over time and specifically looking at factors like time since the last congestion event. It will also size the TCP window independently of ACKs.

All around this makes things much more efficient for data sends across higher latency networks, where bandwidth is less an issue than RTT.
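For reference, the CUBIC growth curve itself is public (RFC 8312): the window grows as a cubic function of the time since the last congestion event, centred on the window size at which that loss occurred. A small sketch using the RFC's standard constants; these are the generic CUBIC parameters, not values specific to the vSphere Replication stack:

C = 0.4       # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7    # multiplicative decrease factor (RFC 8312 default)

def cubic_window(t, w_max):
    """Congestion window (in segments) t seconds after a loss that occurred
    at window size w_max. Note it grows independently of ACK arrival rate."""
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)   # time to climb back to w_max
    return C * (t - k) ** 3 + w_max

for t in (0, 1, 2, 4, 8):
    print("t=%ds  cwnd ~ %.1f segments" % (t, cubic_window(t, w_max=100)))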

Recipient VR Appliance Changes

Vast improvements have been made to the way the vSphere Replication Appliance receives and writes out the changed blocks, by making some small but very clever adaptations:

The biggest change is in the way the appliance sends its writes to the disk with Network File Copy (NFC): we now use buffered writes instead of direct IO. Direct IO requires opening the target disk, writing an extent, waiting for write acknowledgement, moving on to the next write, and so on. Instead, with buffered writes, the VRA will open the target disk in buffered mode and write using NFC with a single sync flag at the end of the write. In essence these are async writes with a sync write at the end of each 'transaction' with the disk. This is a considerably quicker way for VR to do NFC, with no penalty in host performance, and still maintaining data integrity. This gets things to disk much quicker, and provides a huge leap in performance, as we can now acknowledge a whole bunch of writes with one transaction.
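The difference between the two write paths can be sketched with ordinary file I/O: one variant waits for every extent to reach stable storage, the other queues everything and syncs once at the end of the transaction. This is only a local-file illustration of the idea; the appliance actually writes over NFC to the host's target disk:

import os

def write_sync_each(path, extents):
    # Direct-IO-style behaviour: acknowledge each extent individually.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        for extent in extents:
            os.write(fd, extent)
            os.fsync(fd)                 # one stable-storage round trip per extent
    finally:
        os.close(fd)

def write_buffered_then_sync(path, extents):
    # Buffered behaviour: queue all writes, then a single sync ends the transaction.
    with open(path, "wb") as f:
        for extent in extents:
            f.write(extent)              # lands in the buffer cache
        f.flush()
        os.fsync(f.fileno())             # one sync acknowledges the whole batch

extents = [b"x" * 8192 for _ in range(128)]
write_sync_each("/tmp/vr_sync_each.bin", extents)
write_buffered_then_sync("/tmp/vr_buffered.bin", extents)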

A further change is the use of "coalescing buffers" to consolidate contiguous blocks on the appliance before performing a single NFC stream rather than doing each extent in isolation. In 5.1, for example, if there are 128 contiguous 8k writes they would be sent as one NFC transaction, but would be issued as 128 writes to the kernel. In 5.5, if they are contiguous blocks, they are coalesced into a single write transaction that NFC issues to the kernel. This provides less disk and host overhead, and again gets things to disk much quicker.
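The coalescing step is easy to picture as well: sort the incoming extents and merge any run whose offsets are contiguous, so 128 adjacent 8 KB writes become one 1 MB write. A sketch of the idea only; the offsets and sizes are illustrative, not the appliance's actual data structures:

BLOCK = 8 * 1024   # 8 KB extents, as in the example above

def coalesce(extents):
    """extents: list of (offset, length) tuples. Returns merged contiguous runs."""
    merged = []
    for offset, length in sorted(extents):
        if merged and merged[-1][0] + merged[-1][1] == offset:
            merged[-1] = (merged[-1][0], merged[-1][1] + length)   # extend the run
        else:
            merged.append((offset, length))
    return merged

writes = [(i * BLOCK, BLOCK) for i in range(128)]   # 128 contiguous 8 KB writes
print(coalesce(writes))                             # [(0, 1048576)] -> one 1 MB write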

Coupling coalesced buffers with buffered writes and a larger amount of cached data gets much faster writes from the VR Appliance to the host’s target disk.

So that’s what has been changed to improve performance, but what can we now expect in terms of throughput?  Coming up soon in another blog post, I’ll have some sample data from my labs, and a few warnings about the impact of this.  As an anecdotal tease though, I’m seeing roughly 40Mbps for a single VM…

VMware Virtual SAN Beta Available

VMware has released the beta of Virtual SAN,

Full text taken from this post is below:

The VMware Virtual SAN™ beta is available now for download. To claim your copy, visit the VMware Virtual SAN Community page. Our VMware Virtual SAN Community is for MyVMware members only — to gain access, see steps at bottom of this email.*

Share Your Experience

We seek your feedback, positive or negative, about the VMware Virtual SAN Beta. Give feedback and ask questions via the discussion threads on the VMware Virtual SAN Community page. Get answers fast from our product experts.

Be sure to see the How-To Videos, Interactive Demos, Product Documentation, and FAQs that are also available at the VMware Virtual SAN Community.

Rewards for VMware Virtual SAN Community Members

We value your feedback! Take part in raffles and contests that reward engaged users in our VMware Virtual SAN Community. Enjoy giveaways such as iPads, Amazon gift cards and more. Stay tuned to the VMware Virtual SAN Community for more details.

Webinar: How to Install, Configure and Manage VMware Virtual SAN

Save time and gain valuable insight from our Senior Technical Marketing Architect, Cormac Hogan. Webinar date: Wednesday, 2 October at 8:30 am PST. Link to the webinar will be available on the community's website.

Thank you for your interest! We look forward to your feedback regarding the VMware Virtual SAN Beta from VMware.

The VMware Virtual SAN Team
Chat with us on Twitter: https://twitter.com/VMwareVSAN
Contact us: vsanbeta@vmware.com

* How to access the VMware Virtual SAN Beta community website

  1. Register for a My VMware account here (If you already have one skip to next step).
  2. Sign the terms of use here.
  3. Access the VMware Virtual SAN Community page.
  4. Bookmark that link so you can return and participate in our VMware Virtual SAN Community.

vSphere & ESXi 5.5 Download Available Now

Thanks to the Yellow Bricks blog for letting me know that the binaries are now available; links here:

Core vSphere and automation/tools:

Suite components: