10 Gbit Ethernet just got even more attractive. You might remember from our last storage article that we have high hopes for iSCSI as a high-performance yet very cost-effective shared storage solution for SMEs. Our hopes were based on getting 10 Gbit (10GBase-T) over UTP Cat 6 (or even Cat 5e) cabling, but unfortunately the only switch we could find (thanks, Renée!) that supports 10 Gbit this way was the SMC TigerSwitch 10G. At roughly $1000 per port, it is not exactly a budget-friendly offering.

Still, 10 Gbit Ethernet is an incredibly interesting solution for a virtualized server or for an iSCSI storage array that serves data to a lot of servers, virtualized or not.
 
So maybe it is best to give optical cabling another look. Some 10 Gbit Ethernet NICs are getting quite cheap these days, but an enthusiastic Ravi Chalaka, Vice President of Neterion, told us that it might be wise to invest a bit more in NICs with IOV (I/O virtualization) support. According to Neterion, the new Neterion X3100 series is the first adapter to support the new industry standard SR-IOV 1.0 (Single-Root I/O Virtualization), a PCI-SIG workgroup extension to PCIe. One of the features of such a NIC is that it has multiple channels that can accept requests from multiple virtualized servers, which significantly reduces the latency and overhead of multiple servers sharing the same network I/O. Even more important is that the Neterion X3100 is natively supported in VMware ESX 3.5.
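
To make the SR-IOV idea a bit more concrete: an SR-IOV NIC presents itself as one physical function plus several virtual functions, each of which can be handed directly to a virtual machine. The sketch below is a generic illustration, not anything Neterion-specific; it assumes a Linux host recent enough to publish the standard sriov_totalvfs / sriov_numvfs sysfs attributes (far newer kernels than were current when this article ran) and simply lists which interfaces advertise SR-IOV.

```python
#!/usr/bin/env python3
# Generic SR-IOV illustration (not Neterion-specific): list which network
# interfaces on a Linux host expose SR-IOV virtual functions. Assumes a
# kernel recent enough to publish the standard sysfs attributes
# sriov_totalvfs / sriov_numvfs.

from pathlib import Path

NET = Path("/sys/class/net")

for iface in sorted(NET.iterdir()):
    dev = iface / "device"
    total = dev / "sriov_totalvfs"   # max virtual functions the NIC offers
    cur = dev / "sriov_numvfs"       # virtual functions currently enabled
    if total.exists():
        print(f"{iface.name}: {cur.read_text().strip()} of "
              f"{total.read_text().strip()} virtual functions enabled")
    else:
        print(f"{iface.name}: no SR-IOV support exposed")
```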

We will test the Neterion X3100 in the coming months. It seems like a very promising product, as Neterion claims:
 
  • 7 times more bandwidth
  • 50% less latency
  • 40% less TCP overhead

than a comparable 1 Gbit solution. So while many of us are probably quite pleased with the bandwidth of 2 Gbit (2x 1 Gbit MPIO), the 50% lower latency in particular sounds great for iSCSI. Fibre Channel, which is moving towards 8 Gbit, might just have lost another advantage...
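
To put those link speeds in perspective, here is a quick back-of-the-envelope comparison of raw line rates. The 10 GiB image size is just an assumed example, and protocol overhead is ignored, so real transfers will be slower across the board.

```python
# Line-rate-only transfer times for a 10 GiB virtual machine image (an
# assumed example size) over the link speeds discussed above. Ignores
# TCP/iSCSI overhead, so real-world numbers will be somewhat worse.

IMAGE_BITS = 10 * 1024**3 * 8          # 10 GiB expressed in bits

for name, gbit in [("1 Gbit", 1), ("2x 1 Gbit MPIO", 2),
                   ("8 Gbit FC", 8), ("10 Gbit Ethernet", 10)]:
    seconds = IMAGE_BITS / (gbit * 1e9)
    print(f"{name:>17}: {seconds:5.1f} s")
```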

Comments

  • JohanAnandtech - Monday, March 3, 2008 - link

    Latency of storage devices almost always determines the response time and the subjective user "feeling". iSCSI latency is not much higher than FC in our experience, but if you can shave off quite a few ms, it is a big win for your end users.

    I still have to study IOV, but it basically boils down to the fact that there is no switching involved and that most of the TCP processing happens in parallel. A kind of "superscalar" NIC, not a "multicore" NIC.

  • Olaf van der Spek - Monday, March 3, 2008 - link

    quote:

    Latency of storage devices almost always determines the response time and the subjective user "feeling". iSCSI latency is not much higher than FC in our experience, but if you can shave off quite a few ms, it is a big win for your end users.

    I thought latency due to networking was already sub-ms?
    I don't see how you could shave 'quite a few' ms from that.
  • JohanAnandtech - Monday, March 3, 2008 - link

    You are right that a basic ping on a lightly loaded network line takes a lot less than 1 ms.

    However, sending back a basic block of 64 KB, for example, requires quite a few Ethernet packets (43 if you do not use jumbo frames) before you get your block back.
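
    A quick sanity check on that frame count: the sketch below assumes 20 bytes of IPv4 plus 20 bytes of TCP header per frame. Depending on which overheads (and which "64 KB") you count, you land near, rather than exactly on, the 43 quoted above.

    ```python
    # Back-of-the-envelope: how many Ethernet frames does a 64 KB block
    # need? Assumed overhead per frame: 20 bytes IPv4 + 20 bytes TCP,
    # so usable payload = MTU - 40.

    BLOCK = 64 * 1024              # 65,536-byte block

    for mtu in (1500, 9000):       # standard Ethernet vs. jumbo frames
        payload = mtu - 40         # IP + TCP headers come out of the MTU
        frames = -(-BLOCK // payload)   # ceiling division
        print(f"MTU {mtu}: {frames} frames per 64 KB block")
    ```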

    http://it.anandtech.com/IT/showdoc.aspx?i=3147&...

    Look at the random latency: up to 25 ms. And that is not only hard disk latency; see the DAS numbers (6 ms).

    And that is measured from one server direct
