17 Comments
plopke - Friday, July 19, 2019 - link
I don't know why, but the front of that gave me flashbacks to 80s-90s desktop cases. Why even give it a fancy design?
eva02langley - Friday, July 19, 2019 - link
They could have done something else design-wise, I agree.
DanNeely - Friday, July 19, 2019 - link
Same here, I got a vibe of a Sun workstation crossed with modern gamer styling.
Icehawk - Saturday, July 20, 2019 - link
Aesthetics of a rack-mount component that will be in a server room... yeah, super critical.
spkay31 - Monday, July 22, 2019 - link
They must have been designed by old Sun product design engineers, hehehe.
FreckledTrout - Saturday, July 20, 2019 - link
It sits in a server farm, so who cares? Although I must admit, when IBM started making what appear to be Batman-inspired mainframes, like the z14, I thought they looked pretty damn cool in the server room.
Arsenica - Friday, July 19, 2019 - link
Really ugly front panel. It makes me wonder if it was designed by answering the following question: "If the box were an animal, what sort of animal would it be?"
ozzuneoj86 - Friday, July 19, 2019 - link
"... Entry-Level...""... 92TB NVMe..."
"... 16Gbps Fibre Channel..."
@_@
saf227 - Friday, July 19, 2019 - link
Yep! Just screams "average desktop user," doesn't it?
FreckledTrout - Saturday, July 20, 2019 - link
That's fairly entry-level in the enterprise market. To me it's fairly crazy to even think that 1.7M IOPS is entry-level these days.
Dug - Tuesday, July 30, 2019 - link
No, 1.7M IOPS is not entry-level in the enterprise market.
Hamm Burger - Friday, July 19, 2019 - link
Sustained transfer rate of 23GB/s? To what? That's nearly double the capacity of the 100Gb/s fabrics that are just now entering common use. What are they using?
Wardrop - Friday, July 19, 2019 - link
Maybe that's combined over multiple interfaces?
gfkBill - Saturday, July 20, 2019 - link
32GB/s fibre or 40GB/s iSCSI are referred to alongside the 23GB/s in their data sheet; I suspect WD are mixing GB and Gb, which is bizarre for a storage company. Hopefully the storage capacity isn't also a tenth of what they're claiming!
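For anyone following the GB-versus-Gb argument above, here is a minimal sketch of the arithmetic, assuming the usual convention of 8 bits per byte and decimal gigas; the interface speeds below are generic spec-sheet figures used for illustration, not values confirmed from WD's data sheet.

```python
# Quick sanity check on the GB vs Gb confusion discussed above.
# Assumes 1 byte = 8 bits and decimal (SI) giga prefixes, as network
# and storage vendors typically use.

def gbit_to_gbyte(gbit_per_s: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbit_per_s / 8.0

def gbyte_to_gbit(gbyte_per_s: float) -> float:
    """Convert gigabytes per second to gigabits per second."""
    return gbyte_per_s * 8.0

# The quoted sustained rate, taken at face value as gigaBYTES/s:
print(gbyte_to_gbit(23))      # 184 Gb/s -- needs roughly 2x 100Gb/s links

# Common interface speeds are normally specified in gigaBITS/s:
print(gbit_to_gbyte(32))      # 32Gb/s Fibre Channel    -> 4 GB/s
print(gbit_to_gbyte(40))      # 40Gb/s iSCSI (Ethernet) -> 5 GB/s
print(gbit_to_gbyte(100))     # 100Gb/s fabric          -> 12.5 GB/s
```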
Dug - Tuesday, July 30, 2019 - link
To itself. You aren't running one application off of this.
YB1064 - Tuesday, July 23, 2019 - link
Are you guys going to review one of these?
phoenix_rizzen - Thursday, July 25, 2019 - link
Wonder why they went with Xeon CPUs for this. Will there be enough PCIe lanes available to actually use all that glorious NVMe throughput?

Although, I guess this just pushes the bottleneck to the network/fabric that connects this thing to the actual compute/processing nodes, so using PCIe switches to share PCIe lanes across NVMe devices won't be an issue.
Would be interesting to see how this would compare to an EPYC-based system, especially one that combines compute and storage into a single server, where all the extra PCIe lanes would really come in handy (no network/fabric to bottleneck you).
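As a rough back-of-the-envelope illustration of the PCIe-lane question raised in the comment above: the drive count, PCIe generation, and lane counts below are assumptions chosen for illustration, not figures from the article or WD's spec sheet.

```python
# Back-of-the-envelope check of the PCIe-lane question.
# Assumed hardware (illustrative only): 24 U.2 NVMe drives at PCIe 3.0 x4,
# and a dual-socket Xeon platform with 48 lanes per CPU.

PCIE3_GBYTE_PER_LANE = 0.985      # roughly 1 GB/s usable per PCIe 3.0 lane

drives = 24
lanes_per_drive = 4
cpu_sockets = 2
lanes_per_cpu = 48

lanes_needed = drives * lanes_per_drive           # 96 lanes for the SSDs alone
lanes_available = cpu_sockets * lanes_per_cpu     # 96 lanes total, before NICs/HBAs

drive_side_bw = lanes_needed * PCIE3_GBYTE_PER_LANE   # ~95 GB/s aggregate SSD bandwidth
quoted_sustained = 23                                  # GB/s quoted in the data sheet

print(f"lanes needed: {lanes_needed}, lanes available: {lanes_available}")
print(f"aggregate SSD bandwidth ~{drive_side_bw:.0f} GB/s vs {quoted_sustained} GB/s quoted")
# With the quoted sustained rate far below the aggregate SSD bandwidth, sharing
# lanes behind a PCIe switch costs little -- the external fabric is the bottleneck.
```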