AMD's Radeon HD 6990: The New Single Card King
by Ryan Smith on March 8, 2011 12:01 AM EST- Posted in
- AMD
- Radeon HD 6990
- GPUs
The launch drivers for the 6990 are a preview version of Catalyst 11.4, which has been made available today; the final version will launch sometime in April. Compared to the earlier drivers we’ve been using, performance in most of our games is up by at least a few percent, particularly in CrossFire. For launching a dual-GPU card like the 6990, the timing couldn’t be better.
Along with these performance improvements, AMD is also adding a few new features to the Catalyst Control Center, the first changes since the introduction of the new design in January. Chief among these features, and also timed to launch with the 6990 today, is 5x1 portrait Eyefinity mode. Previously AMD supported 3x1 and 3x2 configurations, but never anything wider than 3 monitors (even on the Eyefinity 6 series).
The 6990 is of course perfectly suited for the task, as it's able to drive 4+1 monitors without any miniDP MST hubs, and indeed the rendering capabilities of this card are wasted a good deal of the time when driving only one monitor. Other cards will also support 5x1P, but at the moment only Eyefinity 6 cards can do so without an MST hub. Notably, despite requiring one fewer monitor than 3x2 Eyefinity, this is easily the most expensive Eyefinity option yet: portrait modes require monitors with wide vertical viewing angles to avoid color washout, so you’d be hard pressed to build a suitable setup with cheap TN monitors the way you can for the landscape modes.
The other big change for power users is that AMD is adding a software update feature to the Catalyst Control Center, allowing users to check for driver updates from within the CCC. It will also have an automatic update feature, which checks for driver updates every 2 weeks. At this point there seems to be some confusion at AMD over whether this will be enabled by default: our drivers have it enabled by default, while we were initially told it would be disabled. From AMD’s perspective, enabling auto update improves the user experience by helping get users onto newer drivers that resolve bugs in similarly new games, but at the same time I could easily see this backfiring by being one more piece of software nagging for an update every couple of weeks.
Finally, AMD is undergoing a rebranding (again), this time for the Catalyst Control Center. If you use an AMD CPU + AMD consumer GPU, the Catalyst Control Center is now the AMD VISION Engine Control Center. If you use an Intel CPU + AMD consumer GPU it’s still the Catalyst Control Center. If you use a professional GPU (regardless of CPU), it’s the Catalyst Pro Control Center.
The Test
Due to the timing of this launch we haven’t had an opportunity to do in-depth testing of Eyefinity configurations. We will be updating this article with Eyefinity performance data within the next day. In the meantime we have our usual collection of single monitor tests.
CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3 x 2GB (7-7-7-20)
Video Cards:
- AMD Radeon HD 6990
- AMD Radeon HD 6970
- AMD Radeon HD 6950 2GB
- AMD Radeon HD 6870
- AMD Radeon HD 6850
- AMD Radeon HD 5970
- AMD Radeon HD 5870
- AMD Radeon HD 5850
- AMD Radeon HD 5770
- AMD Radeon HD 4870X2
- AMD Radeon HD 4870
- NVIDIA GeForce GTX 580
- NVIDIA GeForce GTX 570
- NVIDIA GeForce GTX 560 Ti
- NVIDIA GeForce GTX 480
- NVIDIA GeForce GTX 470
- NVIDIA GeForce GTX 460 1GB
- NVIDIA GeForce GTX 460 768MB
- NVIDIA GeForce GTS 450
- NVIDIA GeForce GTX 295
- NVIDIA GeForce GTX 285
- NVIDIA GeForce GTX 260 Core 216
Video Drivers:
- NVIDIA ForceWare 262.99
- NVIDIA ForceWare 266.56 Beta
- NVIDIA ForceWare 266.58
- AMD Catalyst 10.10e
- AMD Catalyst 11.1a Hotfix
- AMD Catalyst 11.4 Preview
OS: Windows 7 Ultimate 64-bit
130 Comments
Spazweasel - Tuesday, March 8, 2011 - link
I've always viewed single-card dual-GPU cards as more of a packaging stunt than a product. They invariably are clocked a little lower than the single-GPU cards they're based upon, and short of a liquid cooling system they are extremely noisy (unavoidable when you have twice as much heat to dissipate with the same sized cooler as the single-GPU card). They also tend not to be a bargain price-wise; compare a dual-GPU card against two of the single-GPU cards with the same GPU.
Personally, I would much rather have discrete GPUs and be able to cool them without the noise. I'll spend a little more for a full-sized case and a motherboard with the necessary layout (two slots between PCIe x16 slots) rather than deal with the compromises of the extra-dense packaging. If someone else needs quad SLI or quad Crossfire, well, fine... to each their own. But if dual GPUs is the goal, I truly don't see any advantage of a dual-GPU card over dual single-GPU cards, and plenty of disadvantages.
Like I said... more of a stunt than a product. Cool that it exists, but less useful than advertised except for extremely narrow niches.
mino - Tuesday, March 8, 2011 - link
"Even -2- years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for -3.5- years the answer was “no.”"

Umm, you sure about both those time values?
:)
Nice review, BTW.
MrSpadge - Tuesday, March 8, 2011 - link
"With a 375W TDP the 6990 should consume less power than 2x200W 6950CF, but in practice the 6950CF setup consumes 21W less. Part of this comes down to the greater CPU load the 6990 can create by allowing for higher framerates, but this doesn’t completely explain the disparity."

If it hasn't been mentioned before: guys, this is simple. The TDP for the HD 6950 is just the PowerTune limit. The "power draw under gaming" is specified at ~150 W, which is just what you'll find during gaming tests.
Furthermore, Cayman runs at a lower voltage (1.10 V), lower clocks, and with fewer units on the HD 6950, so it's only natural for 2 of these to consume less power than an HD 6990. Summing it up, one would expect 1.10^2/1.12^2 * 800/830 * 22/24 = 85.2% of the power consumption of a Cayman on the HD 6990.
MrS
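The commenter's arithmetic can be sanity-checked with a quick script. The V²·f·units scaling model and all numbers come from the comment above; the function name is illustrative, not from any real tool:

```python
# Rough per-GPU power scaling between a full Cayman at HD 6990 settings
# (1.12 V, 830 MHz, 24 SIMDs) and a cut-down HD 6950 (1.10 V, 800 MHz,
# 22 SIMDs), assuming dynamic power scales as V^2 * f * active units.

def scaled_power_fraction(v_new, v_ref, clk_new, clk_ref, units_new, units_ref):
    """Relative power estimate: (voltage ratio)^2 * clock ratio * unit ratio."""
    return (v_new ** 2 / v_ref ** 2) * (clk_new / clk_ref) * (units_new / units_ref)

frac = scaled_power_fraction(1.10, 1.12, 800, 830, 22, 24)
print(f"{frac:.1%}")  # prints 85.2%
```

This is only a first-order model (it ignores static leakage and memory power), but it is enough to show why two HD 6950s can undercut one HD 6990 despite the TDP figures.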
mino - Tuesday, March 8, 2011 - link
You shall not hit them so hard next time. :)

Numbers tend to hurt one's ego badly if properly thrown.
geok1ng - Tuesday, March 8, 2011 - link
The article points out that the 6990 runs much closer to 6950 CF than 6970 CF.

I assume that the author is talking about the 2GB 6950, which can be shader unlocked in a process much safer than flashing the card with a 6970 BIOS.
It would be interesting to see CF numbers for unlocked 6950s.
As it stands the 6990 is not a great product: it requires an expensive PSU and a big case full of fans, at a price point higher than similar CF setups.
Considering that there are ZERO enthusiast mobos that won't accept CF, the 6990 becomes a very hard sell.
Even more troubling is the lack of a DL-DVI adapter in the bundle, scaring away 30" owners, precisely the group of buyers most interested in this video card.
Why should a 30" owner step away from a 580 or SLI 580s, if the 6990 needs the same expensive PSU and the same BIG case full of fans, and a DL-DVI adapter costs more than the price gap to an SLI mobo?
Thanny - Tuesday, March 8, 2011 - link
This card looks very much like the XFX 4GB 5970 card. The GPU position and cooling setup is identical.

I'd be very interested to see a performance comparison with that card, which operates at 5870 clock speeds and has the same amount of graphics memory (which is not "frame buffer", for those who keep misusing that term).
JumpingJack - Wednesday, March 9, 2011 - link
:) Yep, I wished they would actually make it right.
The frame buffer is the amount of memory that stores the pixel and color depth info for a renderable frame of data, whereas graphics memory (or VRAM) is the total memory available to the card, which consequently holds the frame buffer, command buffer, textures, etc. The frame buffer is just a small portion of the VRAM, set aside as the output target for the GPU. The frame buffer size is the same for every modern video card on the planet at the same resolution. I.e. a 1920x1200 resolution with 32-bit color depth has a frame buffer of ~9.2 MB (1920 x 1200 x 32 / 8); if double or triple buffered, multiply by 2 or 3.
Most every tech site misapplies the term "frame buffer": AnandTech, PCPer (big abuser), TechReport... most everyone.
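The arithmetic in the comment above works out as follows (the function name is illustrative, not from any real API; the ~9.2 MB figure is in decimal megabytes):

```python
# Size of the output frame buffer(s) alone, as distinct from total VRAM:
# width * height * (bits per pixel / 8), times the number of buffers.

def frame_buffer_bytes(width, height, bits_per_pixel=32, buffers=1):
    """Bytes needed to hold `buffers` frames at the given resolution/depth."""
    return width * height * (bits_per_pixel // 8) * buffers

single = frame_buffer_bytes(1920, 1200)            # 9,216,000 bytes (~9.2 MB)
triple = frame_buffer_bytes(1920, 1200, buffers=3) # triple buffering
print(single, triple)
```

Even triple-buffered at 1920x1200, the frame buffer proper is under 30 MB, a tiny slice of the 2-4 GB of VRAM on the cards discussed here.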
Hrel - Wednesday, March 9, 2011 - link
Anyone wanting to play at resolutions above 1080p should just buy two GTX 560s for 500 bucks. Why waste the extra 200? There's no such thing as future proofing at these levels.

wellortech - Wednesday, March 9, 2011 - link

If the 560s are as noisy as the 570, I think I would rather try a pair of 6950s.

HangFire - Wednesday, March 9, 2011 - link

And you can't even bring yourself to mention Linux (non) support?

You do realize there are high end Linux workstation users, with CAD, custom software, and OpenCL development projects that need this information?