- asynchronous data (file) replication
- asynchronous volume replication
- data replication
- storage performance
- vmware esxi 5.0
- vmware esxi server
The challenges facing cloud hosting providers
As we can all see, cloud services are becoming more and more popular, and there is nothing strange about that. They offer significantly lower costs, eliminate the need to invest in your own expensive software and hardware, and increase the mobility of employees, who can now work effectively no matter where they are located. The only thing they need is internet access… and, of course, access to the cloud. This rapid growth in the popularity of cloud hosting, and thus in the number of customers, creates the need to provide high random IO performance, but in an inexpensive way, so that it is still possible to offer the service at an attractive price.
That is why one of the main concerns of cloud service providers is the cost they need to bear for services offered for free or at a very low price. The business model is usually based on very low monthly subscription fees spread across thousands of customers. In many cases the services require a large capacity, on the order of tens or hundreds of TB, and must serve thousands of customers at the same time. Serving such a large number of users simultaneously demands very high random IO performance.
Today it is not a problem to build a high-capacity storage appliance while staying within budget. High-capacity SATA drives and high-density enclosures holding up to 48 HDDs, such as those from Supermicro, make it possible.
The problem with SATA drives is their poor random-IO performance, which makes it impossible to serve thousands of users simultaneously at an acceptable performance level. Such a huge number of users online generates a 100% random IO access pattern, and the total performance of a RAID array built from SATA drives turns out to be unacceptably low. The problem is described in our blog post.
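To see why thousands of simultaneous users overwhelm a pure-SATA array, it helps to divide the array's random-IOPS budget across the users. The following is a back-of-envelope sketch; the numbers are illustrative assumptions, not measurements:

```python
# Rough sketch: random-IOPS budget per user on an HDD-only array.
# Both numbers below are illustrative assumptions.

array_random_iops = 300     # "a few hundred IOPS" for an HDD-only RAID array
concurrent_users = 2000     # a modest cloud-hosting user count

per_user = array_random_iops / concurrent_users
print(f"{per_user:.2f} random IOPS per user")  # 0.15 IOPS: roughly one IO every 7 seconds
```

At a fraction of one IO per second per user, every request queues behind thousands of others, which is exactly the unacceptable latency described above.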
What is the solution to poor random-IO performance?
The answer is, of course, faster hard disks. A fast 15k RPM SAS HDD has a random seek time of about 3 ms, while a slow 7200 RPM SATA HDD has a random seek time of about 9 ms. So, by using fast SAS drives, we can improve random performance roughly threefold. This is NOT enough! Currently, SSD drives are recommended as the solution for demanding random-IO applications: an SSD is about 10 times faster than a 15k SAS HDD in terms of random IO performance. The only problem is the price. If you want to offer fast random IO and high capacity while staying within your budget, you need to consider an SSD-cached SATA RAID array. Data Storage Software V6 supports two SSD cache solutions. The first is Adaptec maxCache, which offers read and write caching and promises application performance improvements of up to 13 times. The second, MegaRAID CacheCade Pro from LSI, offers dynamic read and write caching and significantly improved IO performance.
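The seek-time figures above translate directly into per-drive IOPS. A quick sketch (the seek times are the ones quoted above; the RPM-based rotational-latency model and the resulting numbers are generic estimates, not vendor specifications):

```python
# Rough per-drive random-IOPS estimate from mechanical latencies.
# One random IO costs roughly an average seek plus half a platter
# rotation (the average rotational latency).

def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    half_rotation_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    return 1000 / (avg_seek_ms + half_rotation_ms)

sata = hdd_random_iops(avg_seek_ms=9, rpm=7200)    # ~76 IOPS
sas = hdd_random_iops(avg_seek_ms=3, rpm=15_000)   # ~200 IOPS
print(f"SATA 7200: {sata:.0f} IOPS, SAS 15k: {sas:.0f} IOPS ({sas/sata:.1f}x)")
```

An SSD has no seek or rotation at all, which is why a single device can reach tens of thousands of random IOPS and why even a small SSD cache changes the picture so dramatically.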
Some benchmarks made with CacheCade show over 40,000 IOPS with a single SSD and even over 80,000 IOPS with two SSDs (see the interesting test 1 and test 2 results). This is considerably higher than the few hundred IOPS that the HDD array can deliver by itself. Please note that SSD drives have a very limited write endurance compared to regular HDDs, but there are SSDs that offer ultra-high endurance of up to 35 full drive writes per day. We are all aware that every solution has its drawbacks, but isn't that still a small cost considering the spectacular performance we can gain?
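To put "35 full drive writes per day" (DWPD) into perspective, we can convert it into total bytes written over the drive's life. The drive capacity and service period below are illustrative assumptions, not figures from any particular product:

```python
# Sketch: converting a DWPD endurance rating into total terabytes written (TBW).
# Capacity and warranty period are assumed for illustration.

drive_capacity_tb = 0.4       # e.g. a 400 GB high-endurance SSD (assumed)
dwpd = 35                     # the endurance class quoted above
warranty_years = 5            # a typical warranty period (assumed)

tbw = drive_capacity_tb * dwpd * 365 * warranty_years
print(f"Total writes over warranty: {tbw:,.0f} TB")  # 25,550 TB, about 25 PB
```

Even for a write-heavy cache workload, that is an enormous budget, which is why such drives are a reasonable fit for SSD caching.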
Trademarks: The names of actual companies and products mentioned herein may be the trademarks of their respective owners.