iSCSI Performance Myths Explained

Saw this on Twitter today: "iSCSI is a dog"

cloudcomp_group: iSCSI is a dog only when you’re not doing some sort of hardware offload or when you need high throughput. There are many, many applications that do just fine with …

Sure, but you lose a significant amount of storage in the interim.

> With nominal capacities running in the 146GB range (for good drives… not failure-prone knockoffs), you would require 10x the number of drives to get the same usable capacity of a 1.4TB SATA drive (which, btw, would consume less aggregate power than those SSDs).
>
> The scale is not there.

Time to clear up a few myths about iSCSI:

  1. iSCSI is 1Gb only
  2. iSCSI uses SATA drives exclusively
  3. iSCSI is slow

Point 1: 1Gb vs. multi-1Gb vs. 10Gb.  It is pretty safe to say any higher-end, enterprise-class IP SAN offers multiple gigabit Ethernet ports for redundancy and performance.  In our case, all of our IP SANs have at least three connections — even at the low end — putting theoretical performance right at 4Gb Fibre Channel levels.  Many vendors (like us) also offer 10Gb IP SANs, which are obviously much faster than the previous generation of 1Gb SANs, and some even offer dual 10Gb connections.  At that point, iSCSI is only as fast as the server feeding it data.
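For a rough sense of the numbers, here is the back-of-the-envelope math (a sketch only: it uses raw line rates and ignores TCP/IP and iSCSI protocol overhead, which trims real-world throughput):

```python
# Back-of-the-envelope link math: raw line rate only, no protocol overhead.

def line_rate_mb_s(links: int, gbit_per_link: float) -> float:
    """Theoretical ceiling in MB/s for a bundle of Ethernet links."""
    return links * gbit_per_link * 1000 / 8  # 8 bits per byte

print(line_rate_mb_s(3, 1))   # 3 x 1GbE   -> 375.0 MB/s
print(line_rate_mb_s(1, 4))   # 4Gb FC     -> 500.0 MB/s raw (less after 8b/10b encoding)
print(line_rate_mb_s(2, 10))  # dual 10GbE -> 2500.0 MB/s
```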

Point 2: iSCSI uses SATA disk only.  First- and second-generation iSCSI solutions used SATA disk exclusively for cost savings and ease of deployment, though some vendors offered tiered storage with FC or SCSI disk alongside SATA even in the early days.  (We offered multiple levels of disk starting in about 2005.)  With the advent of SAS disk, most enterprise-focused vendors now offer SAS and SATA in the same chassis: SAS for additional performance with transactional data, and tiering so a single IP SAN can serve multiple applications — email, backup, and heavy-duty processing.  iSCSI is a great fit for most applications, short of the absolute heaviest transactional workloads.  (Think NYSE.)
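To make the tiering idea concrete, here is a hypothetical sketch — the tier names and routing rules are invented for illustration, not any vendor’s actual provisioning logic:

```python
# Hypothetical tiering sketch: route a workload to a disk tier based on its
# I/O profile. Tier names and rules are illustrative only; real arrays expose
# this kind of policy through their management interface.

def pick_tier(random_io: bool, latency_sensitive: bool) -> str:
    if random_io and latency_sensitive:
        return "SSD"    # small-block random I/O: hot databases, mail clusters
    if latency_sensitive:
        return "SAS"    # transactional data: email, OLTP
    return "SATA"       # capacity workloads: backup, archive

print(pick_tier(random_io=True, latency_sensitive=True))    # SSD
print(pick_tier(random_io=False, latency_sensitive=True))   # SAS
print(pick_tier(random_io=False, latency_sensitive=False))  # SATA
```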

Point 3: iSCSI is slow?  You haven’t been to our labs.  We are seeing performance in excess of 900MB/s with 10Gb in a single IP SAN.  Wow, I think that’s pretty fast.
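As a quick sanity check on that figure (simple arithmetic, not a benchmark): 10GbE has a raw line rate of 1250MB/s, so 900MB/s works out to roughly 72% wire efficiency — plausible once Ethernet, IP, TCP, and iSCSI framing overhead is subtracted.

```python
# Sanity check: how close is 900MB/s to the 10GbE wire ceiling?
raw_mb_s = 10 * 1000 / 8   # 10Gb/s -> 1250.0 MB/s raw
measured = 900
print(f"{measured / raw_mb_s:.0%} of raw line rate")  # 72%
```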

Picking out storage is a lot like choosing a car.  All of them get you places and handle the same basics, but when you go off-roading, you might want to rethink bringing your Honda Civic.  One size fits all doesn’t work for cars or for storage, and there are many options depending on what you need.

Need more help on picking out an IP SAN solution?


6 Responses to iSCSI Performance Myths Explained

  1. Tim Ingersoll says:

    I find it hard to believe, if not impossible, that you can claim 900 MB/sec performance on 10Gb/sec. Maybe in a controller environment where the data is already cached you might see this, but I challenge you to drop this into any normal IT datacenter networking environment and send out data to clients at this rate for a sustained period of even 15 minutes.

  2. Ken Friend says:

    Believe it. The StoneFly Voyager IP SAN appliance is designed for high-availability, high-performance datacenter requirements. Our patented virtualization engines process IO exceptionally fast, so there is no need to use advanced cache algorithms to spoof performance — and that is not how our storage virtualization technology is designed to work anyway. Our testing is always done using industry-standard IOMETER running on one to four host servers connected via a 10Gb SFP+ copper Arista network, even though our customers have many copper and glass 10Gb connectivity options (CX4, SFP+, or XFP) to choose from. Clearly we use a straightforward test environment with straightforward test parameters. Our engineering labs test for sustainable SAN performance, with runs between 12 and 24 hours, whereas our product regression and sustainability labs run sustaining tests for months on end. StoneFly SAN deliverables are performance-tested for overall sustainable SAN performance; resultant host file system performance will vary depending on the host hardware and OS capabilities.

    The 900MB/second data published in this blog posting is old news (February 09). Today we see actual read and write performance on the StoneFly Voyager IP SAN exceed 1,400MB/sec across 32 SAS drives, and with the next Voyager release (Q1 2010) we anticipate performance to increase by 30% over today’s numbers. Sustainable MB/second results are nice, but clearly not as important as IOPS. Our IO results from 4K–32K random read/write tests are unparalleled, making our Voyager very popular with customers running large database cluster and mail cluster environments. The beauty of our SAN architecture and OS is that they offer our customers many options to mix SSD, SAS, and SATA drives, enabling storage administrators to virtually provision those storage assets to hosts based on application performance or capacity demand. That means we can cover both small-block random IO needs and large-block sequential MB/second needs for any datacenter. I understand the standard storage industry publications are not interested in 10Gb IP SAN bake-offs this year or next (we are looking for any available), but we are always looking to showcase our IP SAN performance in any host environment, side by side against any competitor. If you know of an upcoming 10Gb IP SAN bake-off, please let us know!

    • John Verio says:

      1400MB/sec on a single 10Gb link? Uhm, 10Gb == 1250MB/sec. You would need at least 11.2Gb to get 1400MB/sec. Is that Mb, are you utilizing multiple links, or are you just lying?

  3. Pieter says:

    Sales contact email for Voyager outside the US?