Blade servers: server nirvana?

The idea behind blade servers is brilliant. Imagine that you need about twelve 1U servers, and you need them to be very reliable, attached to a KVM switch, networked, and managed out of band. With redundant power supplies in each server, the result is 24 power supplies, 12 KVM cables and at least 24 Ethernet cables (one for management and one for the network per server), and we are not even counting the cables to external storage or other devices.

What if you could put all 12 of these servers in one 6-7U chassis with 3 (2+1) or 4 (2+2) large power supplies instead of 24 small ones, and let them share a network switch, KVM switch and management module? That is exactly what a blade server is: a 6U, 7U or sometimes 10U chassis that holds roughly 8 to 14 hot-swappable, vertically placed "mini-servers" called blades.
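
To make the consolidation argument concrete, here is a minimal back-of-the-envelope sketch in Python. The per-server counts (two redundant power supplies, one KVM cable, two Ethernet links) follow the example above; the chassis figures (a 2+1 power supply configuration and a handful of shared uplinks) are illustrative assumptions, not the specs of any particular product.

```python
# Back-of-the-envelope comparison: 12 discrete 1U servers vs. one blade chassis.
# All per-server and per-chassis figures are illustrative assumptions.

SERVERS = 12

# Discrete 1U servers: redundant PSUs, one KVM cable and two Ethernet
# links (management + network) per server.
rack_psus = SERVERS * 2
rack_cables = SERVERS * 1 + SERVERS * 2          # KVM + Ethernet

# Blade chassis: shared 2+1 power supplies, and only chassis-level
# connections leave the enclosure (assume 2 Ethernet uplinks,
# 1 management link and 1 KVM connection for the whole chassis).
blade_psus = 3
blade_cables = 2 + 1 + 1

print(f"1U servers : {rack_psus} PSUs, {rack_cables} cables")
print(f"Blade setup: {blade_psus} PSUs, {blade_cables} cables")
# Prints: 24 PSUs / 36 cables versus 3 PSUs / 4 cables
```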

A server blade is a complete server with one or two processors and associated memory, disk storage and network controllers. Each blade slides into a blade bay in the chassis, much like hot-swappable hard disks in a storage rack. Sliding a blade into its bay also connects it to a shared backplane, which links it to the power supplies, DVD-ROM, Ethernet and/or Fibre Channel switches, KVM switch and so on.

Individual Blade and a Blade Chassis

It doesn't take a genius to see that a blade chassis full of blades can be a more interesting option than a stack of 1U servers. A blade server should be easier to manage, offer more processing power in less space, and cost less, as many components can be shared instead of being replicated in each 1U server. Who needs 12 DVD players, 12 separate remote management modules, and 24 power supplies?

According to the four biggest players in the server world - namely Intel, IBM, HP and Dell - blade servers are the way to the new enlightenment, to Server Nirvana. And there is no doubt that blade servers are hot: blade server sales increased quite spectacularly last year, by 40% and more. They are certainly a very promising part of the server market... for some server applications.

The big four see the following advantages:
  • Reductions in cable complexity
  • Operational cost savings
  • Data center space savings
  • Lower acquisition costs
  • Improved high availability
  • More efficient power usage

The promises of reduced cable complexity, easier management (in most cases), space savings, and more processing power in the same rack space have all materialized. But as always, it is important not to be swept away by the hype.

At the end of 2003, one year after the introduction of the blade server, IDC predicted that "the market share for blade servers will grow to 27% of all server units shipped in 2007" [2]. Current IDC estimates put blades at 5 to 7% of server unit shipments, so you can't help but wonder how IDC ever arrived at 27% for 2007. That hasn't stopped IDC from predicting again: by 2010, blade servers will have conquered 25% of the server market. The truth is that blade servers are not always the best solution, and they have quite a long way to go before they can completely replace rack servers.

Comments

  • Whohangs - Thursday, August 17, 2006 - link

    Yes, but multiply that by multiple CPUs per server, multiple servers per rack, and multiple racks per server room (not to mention the extra cooling the server room needs for that extra heat), and your costs quickly add up.
  • JarredWalton - Thursday, August 17, 2006 - link

    Multiple servers all consume roughly the same power and have roughly the same cost, so if you double your servers (say, spend $10,000 for two $5,000 servers), your power costs double as well. That doesn't mean the power cost catches up with the initial server cost any faster. AC costs also add to the electricity bill, but in a large datacenter the AC costs don't fluctuate *that* much in my experience.

    Just for reference, I worked in a datacenter for a large corporation for 3.5 years. Power costs for the entire building? About $40,000-$70,000 per month (this was a 1.5 million square foot warehouse). Cost of the datacenter construction? About $10 million. Cost of the servers? Well over $2 million (thanks to IBM's eServers). I don't think the power draw from the computer room was more than $1000 per month, but it might have been $2000-$3000 or so. The cost of over 100,000 500W halogen lights (not to mention the 1.5 million BTU heaters in the winter) was far more than the cost of running 20 or so servers.

    Obviously, a place like Novell or another company that specifically runs servers and doesn't have tons of cubicle/storage/warehouse space will be different, but I would imagine that places with $100K per month electrical bills probably hold hundreds of millions of dollars of equipment. If someone has actual numbers for electrical bills from such an environment, please feel free to enlighten us.
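
To put the figures in the comment above in perspective, here is a minimal sketch of the arithmetic. The average draw per server and the electricity price are assumptions chosen for illustration, not numbers taken from the comment.

```python
# Rough monthly power cost for a small group of servers.
# The 400 W average draw and $0.08/kWh rate are assumed for illustration.

SERVERS = 20
AVG_DRAW_WATTS = 400        # average draw per server, assumed
PRICE_PER_KWH = 0.08        # USD per kWh, assumed
HOURS_PER_MONTH = 24 * 30

kwh_per_month = SERVERS * AVG_DRAW_WATTS / 1000 * HOURS_PER_MONTH
monthly_cost = kwh_per_month * PRICE_PER_KWH

print(f"{kwh_per_month:.0f} kWh per month -> about ${monthly_cost:.0f}")
# Roughly 5760 kWh, on the order of a few hundred dollars a month for
# 20 servers, which is small next to the building-wide figures above.
```
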
  • Viditor - Friday, August 18, 2006 - link

    It's the cooling (air treatment) that is more important... not just the expense of running the equipment, but the real estate required to place the AC equipment. As datacenters expand, some quickly run out of room for all of the air treatment systems on the roof. By reducing heating and power costs inside the datacenter, you increase the value of each square foot you pay for...
  • TaichiCC - Thursday, August 17, 2006 - link

    Great article. I believe the article also needs to include the impact of software when choosing hardware. If you look at some of the bleeding-edge software infrastructure employed by companies like Google, Yahoo, and Microsoft, RAID and PCI-X are no longer important. Thanks to software, a down server or even a down data center means nothing. They have disk failures every day and the service is not affected by these mishaps. Remember how one of Google's data centers caught fire and there was no impact on the service? Software has allowed cheap hardware without RAID, SATA and/or PCI-X, etc. to function well with no downtime. That also means TCO is mad low, since the hardware is cheap and maintenance is even lower, since software has automated everything from replication to failovers.
  • Calin - Friday, August 18, 2006 - link

    I don't think Google or Microsoft runs their financial software on a big farm of small, inexpensive computers.
    While the "software-based redundancy" is a great solution for some problems, other problems are totally incompatible with it.
  • yyrkoon - Friday, August 18, 2006 - link

    Virtualization is the way of the future. Server admins have been implementing this for years, and if you know what you're doing, it's very effective. You can in effect segregate all your different types of servers (DNS, HTTP, etc.) into separate VMs, and keep multiple snapshots just in case something does get hacked or otherwise goes down (not to mention you can even have redundant servers in software to kick in when this does happen). While VMware may be very good compared to VPC, Xen is probably equally good in comparison to VMware; the performance difference, last I checked, was pretty large.

    Anyhow, I'm looking forward to AnandTech's virtualization part of the article; perhaps we will all learn something :)
  • JohanAnandtech - Thursday, August 17, 2006 - link

    Our focus is mostly on SMBs, not Google :-). Are you talking about cluster failover? I am still exploring that field, as it is quite expensive to build in the lab :-). I would be interested in which technique would be most interesting: a router that simply switches to another server, or a heartbeat system where one server monitors the other.

    I don't think the TCO of implementing that kind of software or those solutions is that low, nor that the hardware is incredibly cheap. You are right when you are talking about "Google datacenter scale". But for a few racks? I am not sure. Working with budgets of 20,000 Euro and less, I'll have to disagree :-).

    Basically, what I am trying to do with this server guide is give beginning server administrators with tight budgets an overview of their options. Too many times SMBs are led to believe they need a certain overhyped solution.
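
As a purely conceptual illustration of the heartbeat approach mentioned in the comment above, here is a minimal Python sketch: a standby machine periodically checks whether the primary answers on a known port and triggers a failover action after several consecutive misses. The host address, port and failover step are hypothetical placeholders, not part of any product discussed here.

```python
# Minimal heartbeat monitor sketch: a standby host polls the primary and
# triggers a failover action after several consecutive missed heartbeats.
# PRIMARY_HOST, PORT and the failover step are illustrative placeholders.

import socket
import time

PRIMARY_HOST = "192.0.2.10"   # placeholder address of the primary server
PORT = 80                     # service port to probe
CHECK_INTERVAL = 5            # seconds between heartbeats
MAX_FAILURES = 3              # tolerate short hiccups before failing over

def primary_is_alive() -> bool:
    """Return True if a TCP connection to the primary succeeds."""
    try:
        with socket.create_connection((PRIMARY_HOST, PORT), timeout=2):
            return True
    except OSError:
        return False

def main() -> None:
    failures = 0
    while True:
        if primary_is_alive():
            failures = 0
        else:
            failures += 1
            print(f"missed heartbeat {failures}/{MAX_FAILURES}")
            if failures >= MAX_FAILURES:
                # In a real setup, this is where the standby would take over
                # the service IP or start the backup service.
                print("primary considered down, initiating failover")
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()
```
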
  • yyrkoon - Friday, August 18, 2006 - link

    Well, if the server is in-house, it's no biggie, but if that server is across the country (or the world), then perhaps paying extra for that 'overhyped solution' so you can remotely access your BIOS may come in handy ;) In-house, a lot of people actually use inexpensive motherboards such as those offered by ASRock, paired with a Celeron / Sempron CPU. Now, if you're going to run more than a couple of VMs on this machine, then obviously you're going to have to spend more anyway for multiple CPU sockets and 8-16 memory slots. Blade servers, IMO, are never an option. $4,000 seems awfully low for a blade server too.
  • schmidtl - Thursday, August 17, 2006 - link

    The S in RAS stands for serviceability, meaning: when the server requires maintenance, repair, or upgrades, what is the impact? Does the server need to be completely shut down (like a PC), or can you replace parts while it's running (hot-pluggable)?
  • JarredWalton - Thursday, August 17, 2006 - link

    Thanks for the correction - can't say I'm a server buff, so I took the definitions at face value. The text on page 3 has been updated.
