Promise VTRAK

The Promise VTRAK E310f 2U RBOD has room for up to 12 hot-swappable SAS or SATA hard drives and supports RAID levels 0, 1, 1E, 5, 6, and 50. Capacity scalability is not a problem: each RBOD can daisy-chain up to four additional drive enclosures, for a total of 60 hard drives. That is about 18TB with 15,000 RPM 300GB SAS disks, or 45TB with 750GB SATA disks. If that is not enough, the 3U VTE610fD holds 16 drives and, combined with 3U JBODs, scales up to 80 drives.

As long as you keep your FC SAN down to a few switches, it should be easy to set up and maintain. With 16-port FC switches, the cost of your SAN should stay reasonable while still offering a huge amount of storage capacity, depending on how many servers need access to the SAN. That is exactly the strength of FC: a few hundred terabytes of storage capacity is possible. The E310f supports two 4Gbps FC host ports per controller and two controllers per storage rack, making "dual path" configurations possible. This in turn enables load balancing and failover, but not without drivers that understand there are multiple paths to the same target. Promise has drivers ready for Windows (based on Microsoft's MPIO driver development kit) and is working on Linux multipath drivers.
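
On the Linux side, the standard route today is the kernel's device-mapper multipath layer. As a rough sketch of what a dual-path configuration for an array like this could look like, assuming the dm-multipath tools are installed (the vendor/product strings and policy values below are illustrative, not Promise's official recommendations):

    # /etc/multipath.conf - minimal dm-multipath sketch (illustrative values)
    defaults {
        user_friendly_names yes
    }
    devices {
        device {
            vendor  "Promise"
            product "VTrak"
            # spread I/O over both FC paths; fail over when a path drops
            path_grouping_policy multibus
            failback immediate
            no_path_retry 12
        }
    }

With a configuration along these lines, multipathd presents the two paths to each LUN as a single /dev/mapper device, so a path failure stays transparent to the filesystem on top.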

The heart of the Promise VTRAK E310f RBOD is the single-core Intel IOP341 I/O processor. This system-on-a-chip is based on the low-power XScale architecture, runs at 1.2GHz, and has a rather large 512KB L2 cache for an embedded chip. The XScale chip provides "pure hardware" RAID, including RAID 6; RAID 0, 1, 1E, 5, 10, 50, and 60 are also supported. By default, Promise equips the E310f with 512MB of 533MHz DDR2 cache, expandable to a maximum of 2GB.


Each RBOD can use a dual active/active controller configuration (with failover/failback) or a cheaper single-controller setup.

Intel SSR212MC2: ultra flexible platform

Whether you want an iSCSI target, a NAS, an iSCSI device that doubles as a NAS fileserver, an RBOD, or just a simple JBOD, you can build it with the Intel SSR212MC2. If you want to use it as an iSCSI device, you have several options:
  • Using software, like we did. You install SUSE SLES on the internal 2.5" SATA/SAS hard disk and make sure the iSCSI target daemon starts as soon as the machine boots (a minimal configuration sketch follows this list). If you are an OEM, you can buy Microsoft's Windows Storage Server 2003 with the Microsoft iSCSI target, or another third-party iSCSI target.
  • Using a SATA, IDE, or USB Disk on Module (DOM). If you don't want to administer a full OS, you can buy a minimal one on a flash module that attaches to your IDE/USB/SATA connector and presents itself as a regular disk.
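
As a concrete illustration of the first option, the sketch below shows a minimal configuration for the open source iSCSI Enterprise Target, the kind of daemon a SLES-based build might run. The target name, device path, and credentials are hypothetical, and your distribution's target implementation and config location may differ.

    # /etc/ietd.conf - minimal iSCSI Enterprise Target sketch (hypothetical values)
    Target iqn.2007-11.lab.example:ssr212mc2.vol0
        # export the RAID volume as LUN 0; blockio avoids caching the data twice
        Lun 0 Path=/dev/sdb,Type=blockio
        # simple one-way CHAP authentication for the initiators
        IncomingUser iscsiuser secretsecret
        MaxConnections 1

Once the ietd service is (re)started, any initiator that authenticates can log in and use the exported volume as if it were a local disk.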


The superbly flexible 2U chassis contains an S5000PSL server board with support for two dual-core (51xx) or quad-core (53xx) Intel Xeon CPUs. In the front are twelve SATA/SAS hard disk bays controlled by the Intel RAID SRCSAS144E controller with 128MB of ECC-protected DDR-400 RAM. This controller uses the older Intel IOP333 processor running at 500MHz, which was a small disappointment: by the time the SSR212MC2 launched, the more potent IOP341 was available at speeds up to 1.2GHz. That chip not only offers a higher clock, but also has far more internal bandwidth (6.4GB/s vs. 2.7GB/s) and supports hardware-accelerated RAID 6. Intel's manual claims that a firmware update will enable RAID 6, but we fear the 500MHz IOP333 might be slightly underpowered to perform the double parity calculations of RAID 6 quickly. (We'll test this in a later article.) Of course, nothing stops the OEM or you from using a different RAID card.

The S5000PSL provides dual Intel PRO/1000 Gigabit Ethernet connections. Since the board allows up to eight cores in your storage server, you can also use it as a regular server performing processing-intensive tasks at the same time.


A single 850W power supply or a redundant 1+1 configuration keeps everything powered, while 10 hot-swappable fans keep everything cool. If you want to turn this into a simple JBOD, you can buy the SSR212MC2 without the motherboard. The highly integrated VSC410 controller on the enclosure management card works together with the SAS expander (PMC-Sierra PM8388 SXP) to offer 24 ports, and you can daisy-chain another JBOD onto the first one.

Comments

  • Lifted - Wednesday, November 7, 2007 - link

    quote:

    We have been working with quite a few SMEs the past several years, and making storage more scalable is a bonus for those companies.


    I'm just wondering why this sentence was linked to an article about a Supermicro dual node server. So do you consider Supermicro an SME, or are you saying their servers are sold to SMEs? I just skimmed the Supermicro article, so perhaps you were working with an SME in testing it? I got the feeling from the sentence that you meant to link to an article where you had worked with SMEs in some respect.
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    No, Supermicro is not an SME in our viewpoint :-). Sorry, I should have been clearer, but I was trying to avoid the article losing its focus.

    I am head of a server lab at the local university, and our goal is applied research in the fields of virtualization, HA, and server sizing. One of the things we do is develop software that helps SMEs (with some special niche application) size their servers. That is what the link points to: a short explanation of the stress-testing client APUS, which has been used to help quite a few SMEs. One of those SMEs is MCS, a software company that develops facility management software. Basically, the logs of their software were analyzed and converted by our stress-testing client into a benchmark. That sounds a lot easier than it is.

    Because these applications are used in the real world and are not industry-standard benchmarks that manufacturers can tune to the extreme, we feel this kind of benchmarking is a welcome addition to the usual benchmarks.
  • hirschma - Wednesday, November 7, 2007 - link

    Is the Promise gear compatible with cluster file systems like PolyServe or GFS? Perhaps the author could get some commentary from Promise.
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    We will. What kind of incompatibility do you expect? It seems to me that the filesystem is rather independent of the storage rack.
  • hirschma - Thursday, November 8, 2007 - link

    quote:

    We will. What kind of incompatibility do you expect? It seems to me that the filesystem is rather independent of the storage rack.


    I only ask because every cluster file system vendor suggests that not all SAN systems are capable of handling multiple requests to the same LUN simultaneously.

    I can't imagine that they couldn't, since I think that cluster file systems are the "killer app" of SANs in general.
  • FreshPrince - Wednesday, November 7, 2007 - link

    I think I would like to try the Intel solution and compare it to my CX3...
  • Gholam - Wednesday, November 7, 2007 - link

    Any chance of seeing benchmarks for LSI Engenio 1333/IBM DS3000/Dell MD3000 series?
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    I am curious, why exactly?

    And yes, we'll do our best to get some of the typical storage devices into the labs. Any reason why you mention these ones in particular (besides being at the lower end of the SAN market)?
  • Gholam - Thursday, November 8, 2007 - link

    Both Dell and IBM are aggressively pushing these in the SMB sector around here (Israel). Their main competition is the NetApp FAS270 line, which is considerably more expensive.
  • ninjit - Wednesday, November 7, 2007 - link

    It's a good idea to define all your acronyms the first time you use them in an article. Sure, a quick Google search told me what an SME was, but defining it helps the casual reader, who would otherwise be directed away from your page.

    What's funny is that you were careful to define FC, SAN, and HA on the first page, just not the title term of your article.
