Recently I was trying to solve a latency problem in a Microsoft 2012 R2 RDSH environment hosted on NetApp FAS storage and vSphere 5.5 Enterprise Plus. The storage was fast by hard-drive standards, using 15k RPM SAS drives, and latency on the aggregate and volumes was relatively low. However, when connecting to RDSH, any amount of latency is very noticeable.
NetApp offers two types of Flash technology for intelligent caching. The first, Flash Cache, consists of PCIe cards that fit directly into the NetApp controllers and improve read performance. Flash Cache is a very good idea, but based on my metrics, I was seeing roughly a 50/50 split between read and write latency. The second, Flash Pool, can cache both reads and writes on its SSDs, unlike Flash Cache. I went ahead and procured a brand-new disk shelf with a combination of 10k RPM SAS drives and SSDs, configured as a hybrid aggregate to take advantage of NetApp's Flash Pool. Initial testing and benchmarking showed increased performance and lower latency. Once I was satisfied, I used Storage vMotion to move the RDSH environment to the new datastore. Users saw much-improved performance at first, but as the environment grew and more VMs were added to the datastore, latency increased and performance dropped.
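To see why a read-only cache cannot fix a 50/50 read/write workload, here is a back-of-the-envelope model. The latency figures are illustrative assumptions (roughly flash-class vs. 15k SAS disk-class service times), not measurements from my environment:

```python
# Illustrative model: a read-only cache (like Flash Cache) accelerates
# only reads, while a read/write cache (like Flash Pool) accelerates both.

def effective_latency_ms(read_ratio, read_ms, write_ms):
    """Weighted average latency for a given read/write mix."""
    return read_ratio * read_ms + (1 - read_ratio) * write_ms

# Assumed service times: ~0.5 ms for flash-served I/O, ~8 ms for spinning disk.
FLASH_MS, DISK_MS = 0.5, 8.0

# 50/50 read/write mix, as observed in the RDSH environment.
read_only_cache = effective_latency_ms(0.5, FLASH_MS, DISK_MS)    # reads cached, writes hit disk
read_write_cache = effective_latency_ms(0.5, FLASH_MS, FLASH_MS)  # both cached

print(f"read-only cache:  {read_only_cache:.2f} ms average")   # 4.25 ms
print(f"read/write cache: {read_write_cache:.2f} ms average")  # 0.50 ms
```

Even with every read served from flash, half the I/O still lands on disk, so average latency stays dominated by the writes. That is why Flash Pool looked like the better fit here.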
Another feature new to vSphere 5.5 was the vSphere Flash Read Cache (vFRC). This seemed like a possible way to reduce at least some of the read latency, so it was worth a shot. I went with a single 200 GB SSD per ESXi host to keep costs down, since this was a new vSphere feature we had not yet tested. Initial tests showed improvement, so two virtual machines were upgraded to VM hardware version 10 with vFRC enabled. Soon afterwards, I started seeing Veeam backups either failing or VM snapshots constantly needing consolidation. This was before Veeam v8, which added NetApp integration and the ability to take storage snapshots. After working with VMware and Veeam without finding a resolution, I was forced to disable vFRC.
Tintri Does “IT” Better
When it comes to high-performance flash storage, there are a lot of players in the market. It was no surprise to hear vendors tell me that an "All-Flash" array was necessary for my type of workload. While All-Flash would indeed solve the challenge I was having, was it the appropriate solution, and did it make sense from a cost-versus-performance standpoint? The short answer was no. In this situation, a "Hybrid-Flash" array made the most sense. But you may be saying to yourself: we just covered NetApp's hybrid solution, and it did not work out. So what makes Tintri different? Well, just about everything.
Let me just say, this is not Tintri versus NetApp. This is Tintri versus aging storage technologies that have seen very limited improvement over the years. These traditional storage arrays use file systems that were designed long before virtualization hit the market, and the same file systems are still in use today. Tintri, on the other hand, has a brand-new file system that was created from the ground up, offers per-VM queuing and QoS, and, most importantly, was designed specifically for virtualization. One of the ways I learned about Tintri was from looking into VMware VVols for more VM-level management at the storage layer. Unfortunately, VVols is still in its infancy, and Tintri created VM-aware storage several years before the release of VVols.
But it is not just about who came up with the idea of VM-level management first. It is about how each technology was designed and implemented. As I said before, Tintri was built from the ground up and is purpose-built for virtualized environments. VVols is just an API and does not change the underlying storage architecture. The point being, not all storage systems are created equal, and traditional storage systems cannot deliver these VM-level data services because they do not fundamentally understand VMs and vDisks.
Enough with the comparisons. You just want to know how well the Tintri VMstore performs, and I can tell you it performs great! The RDSH environment that suffered from latency now lives on a Tintri VMstore T820. Latency is consistently sub-millisecond, and the flash hit ratio is always 99-100%. Performance has improved so much that working in the RDS farm now feels faster than working on my local laptop. But Tintri is not just an RDS or VDI solution. It works great with all types of workloads, whether websites, databases, Exchange, or much more. If you are looking to increase performance and save money in your virtualized infrastructure, then you need to check out Tintri.
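The flash hit ratio is what makes that sub-millisecond figure possible. As a rough illustration (again with assumed flash and disk service times, not Tintri specifications), average latency falls off quickly as the hit ratio climbs:

```python
# Illustrative model: average latency as a function of flash hit ratio,
# with assumed service times for flash-served and disk-served I/O.

def avg_latency_ms(hit_ratio, flash_ms=0.3, disk_ms=8.0):
    """Average latency given the fraction of I/O served from flash."""
    return hit_ratio * flash_ms + (1 - hit_ratio) * disk_ms

for hit in (0.80, 0.95, 0.99):
    print(f"{hit:.0%} flash hit ratio -> {avg_latency_ms(hit):.2f} ms")
# 80% flash hit ratio -> 1.84 ms
# 95% flash hit ratio -> 0.69 ms
# 99% flash hit ratio -> 0.38 ms
```

Under these assumptions, only at hit ratios in the high nineties does average latency stay below a millisecond, which lines up with what I see on the T820.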
For more information about Tintri’s unique Operating System, visit https://www.tintri.com/resources/productinformation/tintri-operating-system-data-sheet.
Totally agree, we love the simplicity and speed of our T850s.
Sounds good. We are living with a traditional storage system for our 300 VMs and planning on evaluating Windows + JBOD in a SOFS cluster. Compared to that, what additional value can Tintri bring besides the turn-key factor?