Random Read Performance is Also Affected

It’s not all about peak bandwidth either. Remember that bandwidth and latency are related, so it’s not too surprising that the setups that delivered the least bandwidth also hurt small file read speed.

The target here is around 80MB/s. That’s what Intel’s X58 can do off one of its native 3Gbps SATA ports. Let’s see how everything else fares:

At 80MB/s the Crucial RealSSD C300 is pushing roughly 20,000 IOPS in this test - the highest random read speed of any MLC SSD we’ve ever tested. With the 890GX the C300 can only manage 64.3MB/s.
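
The IOPS figure follows directly from the transfer size: throughput divided by 4KB per request. A quick sanity check in Python (a minimal sketch; the 4KB request size matches the test above, the rest is plain arithmetic, and the exact result shifts a few percent depending on whether “MB” means 10^6 or 2^20 bytes):

    # Convert 4KB random read throughput (MB/s) to IOPS.
    REQUEST_SIZE = 4 * 1024  # bytes per read in this test

    def iops(mb_per_sec: float, mb: int = 10**6) -> float:
        return mb_per_sec * mb / REQUEST_SIZE

    print(round(iops(80.0)))   # ~19,500 - the C300 on Intel's native 3Gbps SATA
    print(round(iops(64.3)))   # ~15,700 - the same drive behind the 890GX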

Naturally I shared my data with AMD before publishing, including my Iometer test scripts. Running on its internal 890GX test platform, AMD was able to achieve a 4KB random read speed of 102.6MB/s in this test - faster than anything I’d ever tested. Unfortunately that appears to be using AMD’s own internal reference board and not one of the publicly available 890GX platforms. The good news is that if AMD’s numbers are accurate, there is hope for 890GX’s SATA performance. It’s just a matter of getting the 3rd party boards up to speed (AMD has since shared some more results with me that show performance with some beta BIOSes on 3rd party boards improving even more).
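
For anyone who wants to approximate the workload without Iometer, the test boils down to issuing 4KB reads at random offsets with the OS cache out of the picture. A rough Linux-only sketch in Python (the device path, run length, single outstanding request, and use of O_DIRECT are my assumptions, not the actual Iometer configuration used here; Iometer can keep several requests in flight, which pushes IOPS higher):

    import mmap, os, random, time

    DEVICE = "/dev/sdX"   # hypothetical: the SSD under test (needs read permission)
    BLOCK = 4096          # 4KB requests, matching the test above
    DURATION = 10.0       # seconds to run

    buf = mmap.mmap(-1, BLOCK)                       # page-aligned buffer, required by O_DIRECT
    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
    blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK

    done = 0
    start = time.monotonic()
    while time.monotonic() - start < DURATION:
        os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
        os.readv(fd, [buf])                          # one 4KB read at a random offset
        done += 1
    os.close(fd)

    elapsed = time.monotonic() - start
    print(f"{done / elapsed:.0f} IOPS, {done * BLOCK / elapsed / 1e6:.1f} MB/s")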

Using the Marvell 6Gbps controller in any PCIe 2.0 slot (or off a PCIe 2.0 interface, as is the case with Gigabyte’s X58), or in one of ASUS’ 6Gbps ports behind the PLX switch, yields more or less peak performance.

Any of the PCIe 1.0 slots, however, saw a drop from ~80MB/s to ~65MB/s. The exception is Intel’s odd x4 slot, which is a PCIe 1.0 slot but branches off the X58 IOH and thus appears to offer lower latency than the PCIe 1.0 slots dangling off the ICH.
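
The raw numbers suggest this is a latency effect rather than a hard bandwidth cap: even a single PCIe 1.0 lane has roughly 250MB/s of payload bandwidth available, well above the 80MB/s this test demands. A back-of-the-envelope sketch (the 8b/10b encoding overhead is standard for PCIe 1.0/2.0 and SATA; the single-outstanding-request framing is my simplification, not a measured breakdown):

    # Usable payload bandwidth per PCIe lane: line rate x 8b/10b efficiency, in MB/s.
    def lane_mb_per_sec(gt_per_sec: float) -> float:
        return gt_per_sec * 1e9 * (8 / 10) / 8 / 1e6

    print(lane_mb_per_sec(2.5))        # 250.0 - one PCIe 1.0 lane
    print(lane_mb_per_sec(5.0))        # 500.0 - one PCIe 2.0 lane
    print(6e9 * (8 / 10) / 8 / 1e6)    # 600.0 - ceiling of the SATA 6Gbps link itself

    # With one request in flight, throughput is just request size / round-trip time,
    # so the drop from ~80MB/s to ~65MB/s corresponds to only ~12us of extra latency per 4KB read.
    print(4096 / 80e6 * 1e6)           # ~51 microseconds per request at 80MB/s
    print(4096 / 65e6 * 1e6)           # ~63 microseconds per request at 65MB/s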

Comments

  • vol7ron - Thursday, March 25, 2010 - link

    It would be extremely nice to see any RAID tests, as I've been asking Anand for months.

    I think he said a full review is coming; of course, he could have just been toying with my emotions.
  • nubie - Thursday, March 25, 2010 - link

    Is there any logical reason you couldn't run a video card with x15 or x14 links and send the other 1 or 2 off to the 6Gbps and USB 3.0 controllers?

    As far as I am concerned it should work (and I have a geforce 6200 modified to x1 with a dremel that has been in use for the last couple years).

    Maybe the drivers or video bios wouldn't like that kind of lane splitting on some cards.

    You can test this yourself quickly by applying some scotch tape over a few of the signal pairs on the end of the video card; you should be able to see if modern cards have any trouble linking at x9-x15 link widths.
  • nubie - Thursday, March 25, 2010 - link

    Not to mention, where are the x4 6Gbps cards?
  • wiak - Friday, March 26, 2010 - link

    The Marvell chip is a PCIe 2.0 x1 chip anyway, so it's limited to that speed regardless of how it's connected to the motherboard.

    At least this says so:
    https://docs.google.com/viewer?url=http://www.marv...

    Same goes for the USB 3.0 controller from NEC; it's also a PCIe 2.0 x1 chip.
  • JarredWalton - Thursday, March 25, 2010 - link

    Like many computer interfaces, PCIe is designed to work in powers of two. You could run x1, x2, x4, x8, or x16, but x3 or x5 aren't allowable configurations.
  • nubie - Thursday, March 25, 2010 - link

    OK, x12 is accounted for according to this:

    http://www.interfacebus.com/Design_Connector_PCI_E...

    [quote]PCI Express supports 1x [2.5Gbps], 2x, 4x, 8x, 12x, 16x, and 32x bus widths[/quote]

    I wonder about x14, as it should offer much greater bandwidth than x8.

    I suppose I could do some informal testing here and see what really works, or maybe do some internet research first because I don't exactly have a test bench.
  • mathew7 - Thursday, March 25, 2010 - link

    While 12x is good for 1 card, I wonder how feasible 6x would be for 2 gfx cards.
  • nubie - Thursday, March 25, 2010 - link

    Even AMD agrees to the x12 link width:

    http://www.amd.com/us-en/Processors/ComputingSolut...

    Seems like it could be an acceptable compromise on some platforms.
  • JarredWalton - Thursday, March 25, 2010 - link

    x12 is the exception to the powers of 2, you're correct. I'm not sure it would really matter much; Anand's results show that even with plenty of extra bandwidth (i.e. in a PCIe 2.0 x16 slot), the SATA 6G connection doesn't always perform the same. It looks like BIOS tuning is at present more important than other aspects, provided of course that you're not on an x1 PCIe 1.0 link.
  • iwodo - Thursday, March 25, 2010 - link

    Well, we are speaking in terms of graphics, so could a GFX card work at 12x instead of 16x, or even 10x, thereby saving I/O space? Just wondering what the status of PCI-E 3.0 is....
