Category Archives: ESXi 5.5

vCenter Server Appliance 5.5 root account locked out after password expiration

A new VMware KB article has been published which could potentially have widespread impact, particularly in lab, development, or proof-of-concept environments.  The VMware KB article number is 2069041 and it is titled The vCenter Server Appliance 5.5 root account locked out after password expiration.

In summary, the root account of the vCenter Server Appliance version 5.5 becomes locked out 90 days after deployment or after the root password is changed.  This behavior is by design and follows the security best practice of password rotation.  In this case, the required rotation interval is 90 days, after which the account is forcefully locked out if the password has not been changed.

The KB article describes how to prevent a forced lockout as well as how to unlock a locked-out root account.
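
The KB article is the authoritative procedure, but as a rough sketch of the kind of checks involved (these are standard SLES commands run on the appliance console, not quoted from the KB):

    # Check the current password ageing policy for root
    chage -l root

    # Optionally disable expiry for root -- convenient in a lab, but weigh it against
    # the password-rotation best practice the default is trying to enforce
    chage -M -1 root

The appliance management UI (https://<vcsa>:5480, Admin tab) also exposes the password expiry settings if you prefer not to touch the shell.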

Approximately 90 days have elapsed since the release of vSphere 5.5 and I imagine this issue will quickly begin surfacing in large numbers where the vCenter Server Appliance 5.5 has been deployed using system defaults.

Release: VMware vCenter Server 5.5.0.b & vCenter Server Appliance 5.5.0.b

On December 22 VMware released vCenter Server 5.5.0.b as well as vCenter Server Appliance 5.5.0.b.

This release (version b) does not contain new features or enhancements but contains resolved issues only.

The release notes are here.

Download here

Resolved issues

Upgrade and Installation

  • Upgrading the vSphere Web Client to vSphere 5.5 fails with an error
    Attempts to upgrade the vSphere Web Client to vSphere 5.5 fail when it is installed in a custom, non-default location. An error message similar to the following is displayed: Error 29107. The service or solution user already registered... This issue is resolved in this release.

vCenter Server, vSphere Client, and vSphere Web Client

  • vCenter Server 5.5 displays a warning message in the yellow configuration issues box on the Summary tab of the hosts
    When you connect to VMware vCenter Server 5.5 using the vSphere Client or the vSphere Web Client, the Summary tab of an ESXi 5.5 host displays a warning message similar to the following in the yellow configuration issues box: Quick stats on <hostname> is not up-to-date. This issue is resolved in this release.
  • Attempts to export a log bundle using the Log Browser fail
    When you attempt to export a log bundle using the Log Browser interface, the browser window displays a Secure Connection Failed error page with the following message: Your certificate contains the same serial number as another certificate issued by the certificate authority. Please get a new certificate containing a unique serial number. This issue is resolved in this release.

    Note: If you are upgrading from the vCenter Server 5.5 version, see the workaround provided in the known issues section.

  • Attempts to log in to the vSphere Web Client result in an error message
    Attempts to log in to the vSphere Web Client result in an error message if the Local OS identity source is not configured on the SSO configuration page. An error message similar to the following is displayed: Failed to authenticate to inventory service <server IP>:10443. This issue is resolved in this release.

vCenter Single Sign-On

  • After installation of vCenter Single Sign-On 5.5, attempts to connect to the vCenter Single Sign-On server might fail
    After you install vCenter Single Sign-On 5.5 on a Windows system that is not domain-joined and has multiple network interfaces, attempts to connect to the SSO server from other components might fail. You might see a message similar to the following: Could not connect vCenter Single Sign On....make sure the IP address is specified in the URL. This issue is resolved in this release.
  • When upgrading from vCenter Server Appliance 5.0.x to 5.5, vCenter Server fails to start if you select an external vCenter Single Sign-On
    If you select an external vCenter Single Sign-On instance while upgrading the vCenter Server Appliance from 5.0.x to 5.5, vCenter Server fails to start after the upgrade. In the appliance management interface, vCenter Single Sign-On is listed as Not configured. This issue is resolved in this release.
  • Attempts to upgrade vCenter Single Sign-On to 5.5 fail if the SSL certificates are in PKCS12 format
    When you upgrade from vCenter Single Sign-On 5.1 to 5.5, the installer fails and rolls back, before you select an SSO deployment method, with the following error message: vCenter Single Sign-On Setup Wizard ended prematurely because of an error. An error message similar to the following is logged in the vim-sso-msi.log file:

    DEBUG: Error 2896: Executing action ExtractKeystoreInfo failed.

    This issue is resolved in this release by displaying the following warning message:

    Setup has detected a problem with your current configuration which will cause upgrade to fail. Your certificate key store format might be unsupported. See VMware KB 2061404.

  • Upgrade to vCenter Single Sign-On 5.5 fails before you select the deployment type
    Attempts to upgrade to vCenter Single Sign-On 5.5 fail before you can select the deployment type. Error messages similar to the following are logged in the vminst.log file:

    VMware Single Sign-On-build-1302472: 09/26/13 15:18:19 VmSetupMsiIsVC50Installed exit: Error = 1605
    VMware Single Sign-On-build-1302472: 09/26/13 15:18:19 VmSetupGetMachineInfo exit: Error code = 1605

    This issue is resolved in this release.
  • Active Directory is not added automatically as an identity source in vCenter Single Sign-On
    When you initially install vCenter Single Sign-On on a Windows system that is part of an Active Directory domain, the Active Directory is not automatically added as the default identity source in the vCenter Single Sign-On server. This issue is resolved in this release.
  • Unable to edit a vSphere 5.5 Identity Source in the vSphere Web Client
    You cannot edit a vSphere 5.5 Identity Source in the vSphere Web Client because the Edit icon is disabled for the Active Directory Identity Source. You see a warning message similar to the following: Edit Identity Source (Not available). This issue is resolved in this release.
  • Unable to modify the password or remove users from system-domain after you upgrade from vCenter Server 5.1 to 5.5
    After you upgrade from vCenter Server 5.1 to vCenter Server 5.5, you are unable to remove users from system-domain or modify the passwords for these users. This issue is resolved in this release.
  • vCenter Single Sign-On Identity Management service repeatedly logs the message: Enumerate trusts failed Failed to enumerate domain trusts for domain_name (dwError - 5)
    When you configure the Active Directory (Integrated Windows Authentication) Identity Source in vCenter Single Sign-On, the vmware-sts-idmd.log file, located at C:\ProgramData\VMware\CIS\logs\vmware-sso, repeatedly logs the following message: INFO [ActiveDirectoryProvider] Enumerate trusts failed Failed to enumerate domain trusts for domain_name (dwError - 5). This issue is resolved in this release.

Virtual Machine Management

  • Attempts to clone, create, or Storage vMotion a virtual machine fail when the destination datastore is a Storage DRS pod
    In vCenter Server, operations that consume storage space, such as virtual machine creation, cloning, or Storage vMotion, might fail if the destination datastore is a Storage DRS pod and the storage device has the deduplication feature turned on. The following error is displayed: Insufficient disk space on datastore xxxxx. This issue is resolved in this release.

Free ESXi (vSphere) Hypervisor Limitations Removal

Taken from vmguru.nl’s Excellent Site

Post in Full:

Last week I ran into another discussion about the hypervisor under a XenApp deployment: it had to be free or very cheap. The customer was therefore thinking about running Hyper-V underneath it. That can be a viable option, but the admins hoped it would be VMware ESXi because they know that hypervisor and it has never let them down in the past six years. So I got the question: what is possible, can we use the free vSphere Hypervisor? I then remembered that at VMworld San Francisco 2013 the limitations of the free vSphere Hypervisor were lifted.

So now you can use the vSphere Hypervisor 5.5 with:

  • Unlimited number of cores per physical CPU
  • Unlimited number of physical CPUs per host
  • Maximum eight vCPUs per virtual machine
  • Most importantly, the 32 GB RAM limit per server/host has been removed from the free hypervisor (see the quick check below).
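
With the caps gone, the practical limits are whatever the hardware provides. A quick way to see what a host reports (a sketch, assuming SSH or ESXi Shell access; output fields vary slightly by build):

    # What the hypervisor sees on this host
    esxcli hardware cpu global get    # CPU packages, cores and threads
    esxcli hardware memory get        # physical memory and NUMA node count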

So now you can use it underneath a XenApp deployment, or in a stack where you do not need DRS, HA and vMotion. If you do need a central management solution you can use the Essentials Kit, and if you need features such as HA and vMotion you can use the vSphere 5.5 Essentials Plus Kit; both kits are limited to a maximum of 3 servers with 2 physical CPUs per server.

In Europe the Essentials Kit costs €690 for three years and the Essentials Plus Kit costs €5,554 for three years. If you want support for your VMware vSphere Hypervisor you can now purchase per-incident support for it.

vSphere Hardening (Updated with General Availability)

One of the more solid cases in the VMware vs Hyper-V argument is the ‘footprint’ of the hypervisor: VMware wins in terms of size and claims a significant advantage in attack surface thanks to its small, purpose-built hypervisor. However, this is often taken as an excuse by admins to leave a default configuration on hosts and on the vSphere components that make up a virtualisation solution. VMware has made things a little easier for those concerned that a vulnerability in even a single host can mean chaos for the virtual environment: they have now released an updated hardening guide covering the following components:

  • Virtual Machines
  • ESXi hosts
  • Virtual Network
  • vCenter Server plus its database and clients.
  • vCenter Web Client
  • vCenter SSO Server
  • vCenter Virtual Appliance (VCSA) specific guidance
  • vCenter Update Manager

To directly download the guide you can use this link and this one for the change log.
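
To give a flavour of the kind of control the guide covers (this example is mine, not lifted from the guide): forwarding host logs to a central syslog server so that evidence survives a host compromise or reboot.

    # Point the host at a remote syslog target (hostname is a placeholder)
    esxcli system syslog config set --loghost='tcp://syslog.example.com:514'
    esxcli system syslog reload
    # Allow outbound syslog through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true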

VMware Blog Details Here

Thanks to Mike Foley for confirming the version status of the published document.

VMworld 2013 Technical Keynote Summary (from vmguru.nl)

Original post here

Today’s General Session is all about the technology behind the solutions provided by VMware and the partner ecosystem. Featuring Carl Eschenbach, VMware’s President and COO, along with VMware’s engineering leaders, the session showcases state-of-the-art technology that is transforming IT by virtualizing and automating compute, storage, networking and security across private and hybrid clouds.

It has arguably been the largest IT event in Europe for a few years now, growing from 1,400 attendees to more than 8,500. Let’s summarise yesterday’s announcements and releases.

 

VMware has the mission to virtualise the whole x86 datacenter. Carl says: “We are not there yet, but we are on the road towards it.”

 

Software will reduce the friction between business and IT, enabling increased business value. Increase velocity. Spend more budget on innovation, less on maintenance. This requires an agile datacenter: automate IT for IT. The world will be a hybrid cloud in the near future. How? Through the Software-Defined Data Center, where all infrastructure is virtualised and delivered as a service.

How? By three imperatives:

1. Virtualize all of IT

2. IT management gives way to automation

3. Use a compatible hybrid cloud

Carl introduces Kit Colbert, Principal Engineer, on stage. Kit shows us several new or improved products that help achieve the three imperatives and speed up the delivery of services to the business. IT requirements are complex, and on top of that we need to keep the lights on, so there is a lot of pressure on the IT teams.

Kit shows a VMware vCloud Automation Center deployment of a project (Vulcan) through a service catalog on a private cloud infrastructure. After deploying the project in the private cloud, he also shows us how the hybrid cloud option deploys it the same way in the hybrid cloud, taking all the guesswork out. You must understand that running it in a public cloud isn’t always cheaper! While deploying to the private cloud or the hybrid cloud, you can now use auto-scaling options: by enabling the Enable Auto Scale feature you can choose the maximum number of nodes. Control IT through policy to keep it simple for the business.

Governance equals control. Automation brings Self-service and governance together.

Kit shows us IT service in action: self-service, transparent pricing, governance and automation. We get a great cost overview in vCAC: total cloud cost, operational analysis (price per VM) and demand analysis, even a detailed breakdown of cost by department and the underlying projects.

How much does a VM cost over time? Now we can answer that and give the business a lot of detail, or even a live dashboard, showing transparency in pricing not only for private clouds but also for connected public clouds, along with demand analysis. Kit is demoing the IT Business Management Suite (ITBM) integrated into vCAC. We see lots of products nicely integrated and working together!

Next up is vCloud Application Director for application provisioning: Kit shows us the VMware vCloud Application Director v6.0 beta, with a detailed visual overview of a multi-tier application blueprint. Application provisioning delivers streamlined app provisioning, decoupling of app and infrastructure, and automation that reduces steps, risk and faults.

Conventional networking is a barrier to quick app delivery. Kit explains networking with NSX, which decouples the logical network from the underlying hardware, giving a lot more agility. In the demo, Kit shows that all the networking actions in the previous vCAC demo were automatically executed by NSX. Traditional network routing between VMs on the same host introduces hair-pinning of network traffic; NSX routes on the vSphere hosts themselves. Applications can perform faster because hair-pinning is eliminated by NSX on the local host.

How does NSX handle security? It also handles it at the local-host level to prevent unnecessary network traffic, and load balancing is handled the same way as routing and security. This results in faster networking, better performance and happy users. NSX can easily be implemented in existing vSphere infrastructures with traditional vSwitches, without downtime! You have multiple options to move VMs to an NSX network layer: you can power them off and bring them up again, but that means downtime, or you can use vMotion to move them through the NSX bridge to the new NSX-based network. Physical servers can also be connected through a physical NSX bridge.

Making Storage Dynamic

With just a few clicks you can create a vSAN, making it super simple to use. In combination with NSX there will be a very fast network combining all local storage such as SSD and flash for the virtual environment, based on a highly resilient architecture. Kit shows us a demo of how to make storage dynamic with Software-Defined Storage (VMware vSAN). vSAN clusters use local storage to create storage pools out of SSDs and disks while preserving vSphere’s advanced features. Adding a host to a vSAN cluster gives more compute, networking and storage capacity; the building blocks now contain storage! vSAN is highly resilient because it writes multiple copies.

vSAN delivers: storage policy set at time of provisioning, storage that scales, and the ability to leverage existing direct-attached storage.

It is time for End User Computing: end-user freedom with IT control, where VDI now goes mainstream! There is no reason NOT to virtualize desktops. Every application, even graphics-intensive workloads, can be on the platform. Horizon Workspace gives users SSO to all applications and even lets them run applications that are not supported on their OS, like OneNote or Visio on a Mac. Provisioning a virtual desktop through vCAC is now possible, even to the cloud through Desktone, which VMware acquired earlier.

Horizon Suite delivers: self-service access to apps and data on any device; policy, automation, security and compliance; and enhanced productivity for users and IT.

Those were great demos; thanks, Kit, for showing us all the possibilities.

Carl introduces Joe Baguley (VMware CTO EMEA) on stage next.

The conventional approach to management is rigid, complex and fragile. That is the reality, because the problem lies in the system of silos for networking, storage and compute. We have to start accelerating by moving to a policy-based approach. This is the VMware approach to management.

The new approach needs policy-based automation to deliver agile, scalable and simple IT infrastructures.

Carl gives Joe a couple of challenges before he can start the demo. Carl says: “Seamlessly extend to the public cloud. Bring workloads back into the private cloud. No change to IT policies whatsoever.”

Joe shows us a vCAC demo, telling Carl not to pick up the phone yet even though the Vulcan project is red at the moment, but to read the words below the pretty picture ;-) It says: automated remediation action. The system is going to fix itself.

Integration of Operations Manager and vCloud Automation Center creates a self-healing infrastructure.

Automated operations delivers policy-driven, automated, proactive response; intelligent analytics; and visibility into application health.

vCOps has connectors for the other components in the infrastructure, giving a full overview from a single pane of glass. Log Insight intelligently filters and categorises large quantities of log data, reducing 65 million events to 15.

The same templates used for managing VMs on premise are synchronized and available in vCloud Hybrid Service and vice versa.

It is a fundamentally different way of working than in the past. IT is changing; welcome to the next era!

VMware’s commitment is…

  1. No rip and replace in the datacenter.
  2. Get more out of the datacenter.
  3. Build on existing skills.

So let’s tear down the world of siloed IT and create one IT team!

This ends the keynote. Thanks all and have a great VMworld.

Paper: VMware Horizon View Large-Scale Reference Architecture

From virtualization.info (it’s a good blog that you should read!)

VMware has released a paper titled “VMware Horizon View Large-Scale Reference Architecture”. The 30-page paper details a reference architecture based on real-world test scenarios, user workloads and infrastructure system configurations. The RA uses the VCE Vblock Specialized System for Extreme Applications, composed of Cisco UCS server blades and EMC XtremIO flash storage, to support a 7,000-user VMware Horizon View 5.2 deployment. Benchmarking was done using the Login VSI benchmarking suite and its VSImax metric.

The paper covers the following topics:

  • Executive Summary
  • Overview
    • VCE Vblock Specialized System for Extreme Applications
    • VMware Horizon View
    • Storage Components
    • Compute and Networking Components
    • Workload Generation and Measurement
  • Test Results
    • Login VSI
    • Timing Tests
    • Storage Capacity
  • System Configurations
    • vSphere Cluster Configurations
    • Networking Configurations
    • Storage Configurations
    • vSphere Configurations
    • Infrastructure and Management Servers
    • Horizon View Configuration
    • EMC XtremIO Storage Configurations

Conclusion:

Our test results demonstrate that it is possible to deliver an Ultrabook-quality user experience at scale for every desktop, with headroom for any desktop to burst to thousands of IOPS as required to drive user productivity, thanks to the EMC XtremIO storage platform, which provides considerably higher levels of application performance and lower virtual desktop costs than alternative platforms. The high performance and simplicity of the EMC XtremIO array and the value-added systems integration work provided by VCE as part of the Vblock design contributed significantly to the overall success of the project.

vSphere Multi-Core vCPU Clarifications

One of the most common misconfigurations I see in VMware environments is the use of multiple cores per socket. VMware has released a clarification post (reproduced in full below) reminding people of the best-practice advice and clarifying the performance impact of multi-core vCPUs.

This complements a better post by the SANMAN (who provided the graphics of the two recommended configurations).

Full Content of the VMware Post is below:

Does corespersocket Affect Performance?

There is a lot of outdated information regarding the use of a vSphere feature that changes the presentation of logical processors for a virtual machine, into a specific socket and core configuration. This advanced setting is commonly known as corespersocket.

It was originally intended to address licensing issues where some operating systems had limitations on the number of sockets that could be used, but did not limit core count.

KB Reference: http://kb.vmware.com/kb/1010184

It’s often been said that this change of processor presentation does not affect performance, but it may impact performance by influencing the sizing and presentation of virtual NUMA to the guest operating system.

Reference: Performance Best Practices for VMware vSphere 5.5 (page 44): http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

Recommended Practices

#1 When creating a virtual machine, by default, vSphere will create as many virtual sockets as you’ve requested vCPUs and the cores per socket is equal to one. I think of this configuration as “wide” and “flat.” This will enable vNUMA to select and present the best virtual NUMA topology to the guest operating system, which will be optimal on the underlying physical topology.

#2 When you must change the cores per socket though, commonly due to licensing constraints, ensure you mirror physical server’s NUMA topology. This is because when a virtual machine is no longer configured by default as “wide” and “flat,” vNUMA will not automatically pick the best NUMA configuration based on the physical server, but will instead honor your configuration – right or wrong – potentially leading to a topology mismatch that does affect performance.
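
A quick way to see how an existing virtual machine is presented (a sketch; the datastore path and VM name below are placeholders) is to look at the relevant keys in its .vmx file:

    # On the ESXi host: how are this VM's vCPUs presented?
    grep -E 'numvcpus|coresPerSocket' /vmfs/volumes/datastore1/myvm/myvm.vmx
    # numvcpus is the total vCPU count; if cpuid.coresPerSocket is absent or "1",
    # the VM is "wide and flat" (one core per virtual socket)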

To demonstrate this, the following experiment was performed. Special thanks to Seongbeom for this test and the results.

Test Bed

Dell R815 AMD Opteron 6174 based server with 4x physical sockets by 12x cores per processor = 48x logical processors.

The AMD Opteron 6174 (aka Magny-Cours) processor is essentially two 6 core Istanbul processors assembled into a single socket. This architecture means that each physical socket is actually two NUMA nodes. So this server actually has 8x NUMA nodes and not four, as some may incorrectly assume.

Within esxtop, we can validate the total number of physical NUMA nodes that vSphere detects.
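
If you want to check the same thing on your own hosts, two quick options (a sketch, not the exact screens from the post):

    # From the ESXi Shell: NUMA node count is listed alongside physical memory
    esxcli hardware memory get

    # Or interactively: run esxtop and press 'm' for the memory screen; the header
    # shows per-NUMA-node memory statistics
    esxtop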

Test VM Configuration #1 – 24 sockets by 1 core per socket (“Wide” and “Flat”)

Since this virtual machine requires 24 logical processors, vNUMA automatically creates the smallest topology to support this requirement being 24 cores, which means 2 physical sockets, and therefore a total of 4 physical NUMA nodes.

Within the Linux based virtual machine used for our testing, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware
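
For reference, the command and the shape of its output look like this (illustrative; node numbering and sizes will differ on your system):

    numactl --hardware
    #   available: 4 nodes (0-3)      <- four virtual NUMA nodes presented to the guest
    #   node 0 cpus: 0 1 2 3 4 5      <- six vCPUs per node
    #   node 0 size: <memory> MB
    #   ...
    #   node distances:               <- relative access cost between nodes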

Next, we ran an in-house micro-benchmark, which exercises processors and memory. For this configuration we see a total execution time of 45 seconds.

Next let’s alter the virtual sockets and cores per socket of this virtual machine to generate another result for comparison.

Test VM Configuration #2 – 2 sockets by 12 cores per socket

In this configuration, while the virtual machine is still configured to have a total of 24 logical processors, we manually intervened and configured 2 virtual sockets by 12 cores per socket. vNUMA will no longer automatically create the topology it thinks is best, but will instead respect this specific configuration and present only two virtual NUMA nodes, as defined by our virtual socket count.

Within the Linux based virtual machine, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware

Re-running the exact same micro-benchmark we get an execution time of 54 seconds.

This configuration, which resulted in a non-optimal virtual NUMA topology, incurred a 17% increase in execution time.

Test VM Configuration #3 – 1 socket by 24 cores per socket

In this configuration, while the virtual machine is again still configured to have a total of 24 logical processors, we manually intervened and configured 1 virtual socket by 24 cores per socket. Again, vNUMA will no longer automatically create the topology it thinks is best, but will instead respect this specific configuration and present only one NUMA node, as defined by our virtual socket count.

Within the Linux based virtual machine, we can validate what vNUMA presented to the guest operating system by using: numactl --hardware

Re-running the micro-benchmark one more time we get an execution time of 65 seconds.

This configuration, with yet a different non-optimal virtual NUMA topology, incurred a 31% increase in execution time.

To summarize, this test demonstrates that changing the corespersocket configuration of a virtual machine does indeed have an impact on performance in the case when the manually configured virtual NUMA topology does not optimally match the physical NUMA topology.

The Takeaway

Always spend a few minutes to understand your physical server's NUMA topology and leverage that knowledge when right-sizing your virtual machines.

Other Great References:

The CPU Scheduler in VMware vSphere 5.1

Check out VSCS4811 Extreme Performance Series: Monster Virtual Machines in VMworld Barcelona

vSphere Blog Details Replication Improvements in 5.5

From this vSphere Blog entry by Ken Werneburg:

The changes they’ve made fall into three main areas:

  1. Improved buffering algorithms at the source hosts resulting in better read performance with less load on the host and better network transfer performance
  2. More efficient TCP algorithms at the source site resulting in better latency handling
  3. More efficient buffering algorithms at the target site resulting in better write performance with less load on the host

Let’s look at each of these in a bit more detail:

Source improvements

The way blocks are queued up for transfer has changed slightly from the past iterations where TransferDiskMaxBufferCount & TransferDiskMaxExtentCount were the primary throttling mechanisms for reading and sending changed blocks.

Now, we use a global heap on each host to hold the blocks that have been identified to be sent.  We create a heap of appropriate size to hold blocks for potentially the maximum number of VMs on a host.  This is dynamically sized according to the host memory size and the maximum number of VMs on the host, but roughly it sizes to a bit more than 3MB of memory.  We then load that heap with changed blocks identified by the agent that tracks the blocks as they change.

The blocks are read into the heap essentially in a round-robin fashion among the replicated disks, grabbing up to 16 changed blocks at a time, and a maximum of 1024 IOPS per host is set to ensure this cyclical reading doesn’t overburden anything.

Blocks are then shipped from the heap to the VR Server on the remote site, with a maximum of 64 extents still “in-flight” that have not been acknowledged as written.  As those blocks come back acknowledged, the agent is free to send more from the heap.

The net result is that this is a much more efficient mechanism as we can load and send from a global heap rather than treating each VM as its own object.  Fundamentally this leads to a greater overall efficiency of the VR resource manager, and allows getting data to the VR Server faster.

TCP Changes

The TCP algorithm at the source site has been changed to a CUBIC[3]-based transport.  This is a fairly minor change, but it has a very good impact on long fat networks, such as the higher-latency yet still high-bandwidth connections that people often use for replication.  It uses a much smarter means of determining factors like TCP window size, based on accelerated probing over time and specifically looking at factors like the time since the last congestion event.  It will also size the TCP window independently of ACKs.

All around this makes things much more efficient for data sends across higher latency networks, where bandwidth is less an issue than RTT.
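
The congestion control choice itself is internal to the VR components, but if you want to see the same idea on a generic Linux host, the algorithm in use is visible (and switchable) through sysctl. A small sketch:

    # Show the congestion control algorithm currently in use, and those available
    sysctl net.ipv4.tcp_congestion_control
    cat /proc/sys/net/ipv4/tcp_available_congestion_control

    # Switch to CUBIC (already the default on most modern Linux kernels)
    sysctl -w net.ipv4.tcp_congestion_control=cubic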

Recipient VR Appliance Changes

Vast improvements have been made to the way the vSphere Replication Appliance receives and writes out the changed blocks, by making some small but very clever adaptations:

The biggest change is a switch in the way the appliance sends its writes to the disk with Network File Copy (NFC).  We now use buffered writes instead of direct I/O.  Direct I/O requires opening the target disk, writing an extent, waiting for the write acknowledgement, moving on to the next write, and so on.  Instead, with buffered writes, the VRA opens the target disk in buffered mode and writes using NFC with a single sync flag at the end of the write.  In essence these are async writes with a sync write at the end of each ‘transaction’ with the disk. This is a considerably quicker way for VR to do NFC, with no penalty in host performance, while still maintaining data integrity.  This gets things to disk much quicker and provides a huge leap in performance, as we can now acknowledge a whole bunch of writes with one transaction.
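
To get a feel for why this matters, here is a rough analogy on a generic Linux box (this is not VR code, and the paths are hypothetical): writing the same amount of data with synchronous direct I/O versus buffered writes flushed once at the end.

    # Direct I/O: each 8 KB write bypasses the page cache and waits on the device
    dd if=/dev/zero of=/var/tmp/direct.bin bs=8k count=16384 oflag=direct

    # Buffered writes with a single flush at the end -- typically completes much faster
    dd if=/dev/zero of=/var/tmp/buffered.bin bs=8k count=16384 conv=fsync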

A further change is the use of “coalescing buffers” to consolidate contiguous blocks on the appliance before performing a single NFC stream, rather than handling each extent in isolation.  In 5.1, for example, if there were 128 contiguous 8k writes they would be sent as one NFC transaction but issued as 128 writes to the kernel.  In 5.5, contiguous blocks are coalesced into a single write transaction that NFC issues to the kernel.  This means less disk and host overhead, and again gets things to disk much quicker.

Coupling coalesced buffers with buffered writes and a larger amount of cached data gets much faster writes from the VR Appliance to the host’s target disk.

So that’s what has been changed to improve performance, but what can we now expect in terms of throughput?  Coming up soon in another blog post, I’ll have some sample data from my labs, and a few warnings about the impact of this.  As an anecdotal tease though, I’m seeing roughly 40Mbps for a single VM…

VMware Virtual SAN Beta Available

VMware has released the beta of Virtual SAN,

Full text taken from this post is below:

The VMware Virtual SAN™ beta is available now for download. To claim your copy, visit the VMware Virtual SAN Community page. Our VMware Virtual SAN Community is for MyVMware members only — to gain access, see steps at bottom of this email.*

Share Your Experience

We seek your feedback, positive or negative, about the VMware Virtual SAN Beta. Give feedback and ask questions via the discussion threads on the VMware Virtual SAN Community page. Get answers fast from our product experts.

Be sure to see the How-To Videos, Interactive Demos, Product Documentation, and FAQs that are also available at the VMware Virtual SAN Community.

Rewards for VMware Virtual SAN Community Members

We value your feedback! Take part in raffles and contests that reward engaged users in our VMware Virtual SAN Community. Enjoy giveaways such as iPads, Amazon gift cards and more. Stay tuned to the VMware Virtual SAN Community for more details.

Webinar: How to Install, Configure and Manage VMware Virtual SAN

Save time and gain valuable insight from our Senior Technical Marketing Architect, Cormac Hogan.  Webinar date: Wednesday, 2 October at 8:30 am PST. A link to the webinar will be available on the community’s website.

Thank you for your interest! We look forward to your feedback regarding the VMware Virtual SAN Beta from VMware.

The VMware Virtual SAN Team
Chat with us on Twitter: https://twitter.com/VMwareVSAN
Contact us: vsanbeta@vmware.com

* How to access the VMware Virtual SAN Beta community website

  1. Register for a My VMware account here (if you already have one, skip to the next step).
  2. Sign the terms of use here.
  3. Access the VMware Virtual SAN Community page.
  4. Bookmark that link so you can return and participate in our VMware Virtual SAN Community.