IOMeter

IOMeter is an open-source tool, originally developed by Intel, that can measure I/O performance in a wide variety of ways: random, sequential, or a combination of the two; reads, writes, or a combination of the two; and block sizes from a few KB to several MB. IOMeter generates the workload and measures how fast the I/O system handles it. To see how close we can get to the limits of our interfaces, we first ran a test with RAID 0.
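
To make those access patterns concrete, below is a minimal Python sketch (not IOMeter itself; the file name and sizes are made-up placeholders) that times sequential versus random reads at the 64KB request size used in most of our tests. A real benchmark would target a raw volume and bypass the OS cache, which this simple POSIX-only version does not do.

    # Minimal sketch of sequential vs. random reads, in the spirit of an
    # IOMeter access specification. File name and sizes are hypothetical;
    # results will be inflated by the OS page cache on a real system.
    import os
    import random
    import time

    PATH = "testfile.bin"            # hypothetical test file
    BLOCK = 64 * 1024                # 64KB requests
    FILE_SIZE = 256 * 1024 * 1024    # 256MB

    def throughput(pattern):
        """Read the whole file in BLOCK-sized requests and return MB/s."""
        offsets = list(range(0, FILE_SIZE, BLOCK))
        if pattern == "random":
            random.shuffle(offsets)  # same requests, randomized order
        fd = os.open(PATH, os.O_RDONLY)
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        elapsed = time.perf_counter() - start
        os.close(fd)
        return FILE_SIZE / elapsed / (1024 * 1024)

    if __name__ == "__main__":
        if not os.path.exists(PATH):
            with open(PATH, "wb") as f:  # create the test file once
                f.write(os.urandom(FILE_SIZE))
        for pattern in ("sequential", "random"):
            print(pattern, round(throughput(pattern), 1), "MB/s")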

Sequential IOMeter tests - 8 disks in RAID 0

Considering that one of our Fujitsu 15,000 RPM SAS disks can sustain a bit more than 90MB/s, 718MB/s is about the maximum performance we can expect from eight disks. The iSCSI and FC setups come close to maxing out their interfaces, at 125MB/s and 400MB/s respectively.
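
The arithmetic behind those ceilings is straightforward; a quick sketch using the rounded figures assumed above:

    # Back-of-the-envelope interface ceilings (rounded figures from the text)
    disk_seq = 90              # MB/s for one 15,000 RPM SAS disk, sequential
    print(8 * disk_seq)        # 720 MB/s: eight-disk RAID 0 ceiling
    print(1000 / 8)            # 125 MB/s: Gigabit Ethernet (iSCSI) wire limit
    print(4250 * 0.8 / 8)      # 425 MB/s: 4Gb/s FC line rate (4.25Gbaud with
                               # 8b/10b encoding); roughly 400 MB/s usable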

Using RAID 0 for running databases is not good practice given the potential for data loss, so from now on we test with RAID 5. The next test covers the fastest way to access the database: reading sequentially.

Sequential IOMeter tests - RAID 5

Again, the Intel IOP333 in our DAS does not let us down. Seven striped disks should achieve about 630MB/s, and our DAS configuration comes very close to this theoretical maximum. Interface speed bottlenecks the other setups. Only the StarWind target starts to show that it is a lower-performance offering.
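
The 630MB/s figure follows from the RAID 5 layout: with eight disks, one disk's worth of bandwidth effectively goes to parity, leaving roughly seven disks streaming data:

    # RAID 5 sequential read ceiling (same 90 MB/s per-disk assumption)
    print((8 - 1) * 90)  # 630 MB/s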

If we read randomly, the disk array has to work a lot harder: the heads must seek to a new location for nearly every access instead of streaming data from consecutive sectors.

IOMeter RAID 5 Random Read (64KB)

The StarWind iSCSI target gives up, as it cannot cope with a random access pattern. It performed rather badly in other tests too, so to reduce the number of tests we did not include it in further testing.

This is where the SLES iSCSI target shines, delivering performance equal to the DAS and FC setups. With random accesses, it is little surprise that the larger cache of the Promise VTRAK doesn't help. However, we would have expected a small boost from the newer Intel IOP341 used in the Promise appliance.

Some applications write heavily to the disks. What happens if we do nothing but write, sequentially or randomly?

IOMeter RAID 5 Sequential Write (64KB)

The large 512MB cache of the Promise VTRAK E310f pays off: it can write at almost the maximum speed its 4Gb/s interface allows. Our Intel SSR212MC2, whose controller carries a smaller 128MB cache, is about 15% slower. Microsoft's iSCSI target is about 38% faster in sequential writes than the iSCSI target that comes with SUSE's SLES 10 SP1.

IOMeter RAID 5 Random Write (64KB)

The Microsoft iSCSI target holds a similar advantage in the random write benchmark; the way Microsoft's initiator sends blocks off to the iSCSI target apparently helps in this type of test. The VTRAK E310f is the winner again. This is clearly not a result of its faster interface, but probably a consequence of the newer Intel IOP processor.

An OLTP database and other applications will probably mix reading and writing, so a benchmark scenario with 66% reads and 33% writes is another interesting test.
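
As a concrete illustration of such a mixed pattern, here is a rough Python sketch in the spirit of the IOMeter access specification we used (the file name and request count are hypothetical placeholders); the charts below show how the setups handle it.

    # Rough sketch of a 66% read / 33% write mix with random 64KB requests.
    # File name and request count are hypothetical placeholders.
    import os
    import random

    BLOCK = 64 * 1024
    fd = os.open("testfile.bin", os.O_RDWR)
    size = os.fstat(fd).st_size
    buf = os.urandom(BLOCK)

    for _ in range(10000):
        off = random.randrange(0, size - BLOCK + 1, BLOCK)
        if random.random() < 0.66:
            os.pread(fd, BLOCK, off)   # roughly two thirds of requests read
        else:
            os.pwrite(fd, buf, off)    # the remaining third write
    os.close(fd)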

IOMeter RAID 5 Sequential R/W 66%/33% (64KB)

IOMeter RAID 5 Random R/W 66%/33% (64KB)

In this case, the Linux iSCSI target is about 20% faster than the Microsoft iSCSI target. The Linux target is quicker at random reads and at mixed reads and writes, but a lot slower than the Microsoft target when doing nothing but writing. It will be interesting to research this further. Does Linux have a better I/O system than Windows, especially for reads, or is the SLES iSCSI target simply not well optimized for writing? Is using a Microsoft initiator a disadvantage for the Linux iSCSI target? These questions are outside the scope of this article, but they're interesting nonetheless.

The Promise VTRAK E310f has won most of the benchmarks thanks to its larger cache and newer IOP processor. We'll update our benchmarks as soon as we can use a newer RAID controller based on the IOP341 in our Intel system.

Comments

  • Lifted - Wednesday, November 7, 2007

    quote:

    We have been working with quite a few SMEs the past several years, and making storage more scalable is a bonus for those companies.


    I'm just wondering why this sentence was linked to an article about a Supermicro dual-node server. Do you consider Supermicro an SME, or are you saying their servers are sold to SMEs? I just skimmed the Supermicro article, so perhaps you were working with an SME in testing it? I got the feeling from the sentence that you meant to link to an article where you had worked with SMEs in some respect.
  • JohanAnandtech - Wednesday, November 7, 2007

    No, Supermicro is not an SME in our view :-). Sorry, I should have been clearer, but I was trying to keep the article from losing its focus.

    I am head of a server lab at the local university, and our goal is applied research in the fields of virtualization, HA, and server sizing. One of the things we do is develop software that helps SMEs (with some special niche application) size their servers. That is what the link points to: a short explanation of the stress-testing client APUS, which has been used to help quite a few SMEs. One of those SMEs is MCS, a software company that develops facility management software. Basically, the logs of their software were analyzed and converted by our stress-testing client into a benchmark. Sounds a lot easier than it is.

    Because these applications are used in the real world and are not industry-standard benchmarks that manufacturers can tune to the extreme, we feel this kind of benchmarking is a welcome addition to the normal benchmarks.
  • hirschma - Wednesday, November 7, 2007

    Is the Promise gear compatible with cluster file systems like PolyServe or GFS? Perhaps the author could get some commentary from Promise.
  • JohanAnandtech - Wednesday, November 7, 2007

    We will. What kind of incompatibility do you expect? It seems to me that the filesystem is rather independent of the storage rack.
  • hirschma - Thursday, November 8, 2007

    quote:

    We will. What kind of incompatibility do you expect? It seems to me that the filesystem is rather independent of the storage rack.


    I only ask because every cluster file system vendor suggests that not all SAN systems are capable of handling multiple requests to the same LUN simultaneously.

    I can't imagine that they couldn't, since I think that cluster file systems are the "killer app" of SANs in general.
  • FreshPrince - Wednesday, November 7, 2007

    I think I would like to try the Intel solution and compare it to my CX3...
  • Gholam - Wednesday, November 7, 2007

    Any chance of seeing benchmarks for LSI Engenio 1333/IBM DS3000/Dell MD3000 series?
  • JohanAnandtech - Wednesday, November 7, 2007

    I am curious why, exactly?

    And yes, we'll do our best to get some of the typical storage devices into the labs. Any reason why you mention these ones in particular (besides being at the lower end of the SAN market)?
  • Gholam - Thursday, November 8, 2007

    Both Dell and IBM are aggressively pushing these in the SMB sector around here (Israel). Their main competition is the NetApp FAS270 line, which is considerably more expensive.
  • ninjit - Wednesday, November 7, 2007

    It's a good idea to define all your acronyms the first time you use them in an article. Sure, a quick Google search told me what an SME was, but spelling it out helps the casual reader, who would otherwise be directed away from your page.

    What's funny is that you were careful to define FC, SAN, and HA on the first page, just not the title term of your article.
