The IGP Chronicles Part 1: Intel's G45 & Motherboard Roundup
by Anand Lal Shimpi & Gary Key on September 24, 2008 12:00 PM EST
Posted in: Motherboards
Competitive Integrated Graphics?
Since the G45 GMCH is built on a 65nm process, it can be larger than G35's GMCH - and thus Intel increased the number of unified shader processors from 8 to 10.
| 1024 x 768 | Intel G45 | Intel G35 |
|---|---|---|
| Enemy Territory: Quake Wars | 6.8 fps | 5.6 fps |
| Company of Heroes | 24.5 fps | 16.5 fps |
| Race Driver GRID | 3.7 fps | 2.8 fps |
| Age of Conan | 7.9 fps | 6.1 fps |
| Crysis | 9.3 fps | 7.9 fps |
| Spore | 10.8 fps | 9.7 fps |
| Half Life 2 Episode Two | 40.7 fps | 27.9 fps |
| Oblivion (800 x 600) | 22.7 fps | 14.7 fps |
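The gains in the table above vary quite a bit by title. A quick script (Python, with the fps figures copied straight from the table) computes the G45's per-game speedup over G35:

```python
# Per-game fps gain of G45 over G35, using the figures from the table above.
results = {
    "Enemy Territory: Quake Wars": (6.8, 5.6),
    "Company of Heroes": (24.5, 16.5),
    "Race Driver GRID": (3.7, 2.8),
    "Age of Conan": (7.9, 6.1),
    "Crysis": (9.3, 7.9),
    "Spore": (10.8, 9.7),
    "Half Life 2 Episode Two": (40.7, 27.9),
    "Oblivion (800 x 600)": (22.7, 14.7),
}

def speedup(g45_fps, g35_fps):
    """Percentage fps gain of G45 over G35."""
    return (g45_fps / g35_fps - 1.0) * 100.0

for game, (g45, g35) in results.items():
    print(f"{game}: +{speedup(g45, g35):.0f}%")
```

The gains range from roughly 11% (Spore) to about 54% (Oblivion), well beyond what the 25% increase in shader count alone would explain, suggesting other changes in the core contribute as well.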
These shader processors are roughly comparable to NVIDIA's, meaning that in terms of raw processing power the G45 GMCH has 1/24th the execution resources of NVIDIA's GeForce GTX 280. It gets even worse if you compare it to a more mainstream solution: the recently announced GeForce 9500 GT, for example, has only 32 SPs, putting the G45 at around 1/3 the clock-for-clock power of a 9500 GT.
Then there's the clock speed issue. While the GeForce 9500 GT runs its array of SPs at 1.4GHz, Intel runs its shader processors at 800MHz. Both Intel's and NVIDIA's architectures have a peak throughput of one shader instruction per clock, so while the 9500 GT has over 3x the execution resources of the G45 GMCH, it also enjoys a 75% clock speed advantage, giving it a 5.6x raw throughput advantage over the G45 GMCH.
But how does Intel's graphics core compare to NVIDIA's IGP equivalent? The GeForce 8200 is NVIDIA's latest integrated graphics core; it has 8 SPs running at 1.2GHz, giving NVIDIA a 20% advantage in raw throughput on paper.
There are many unknowns here, however. NVIDIA has special execution units for transcendentals, and it's unclear whether Intel has the same. There are also times when Intel must relegate vertex processing to the CPU, which can cause strange performance characteristics. But the point is that Intel's latest graphics core, at least on paper, is competitive with what NVIDIA is offering.
Neither the Intel nor the NVIDIA solution can hold a candle to AMD's 780G, which has a peak throughput of 40 shader operations per clock compared to 10 for Intel and 8 for NVIDIA. The reason AMD can do so much more is that each of its 8 processing clusters is 5-wide, just like on its desktop GPUs. If there's enough parallel data to work on, each of these clusters can output five shader instructions per clock. Real-world utilization falls somewhere between one and five depending on how efficient AMD's real-time compiler is and on the code being run, but this generally translates into AMD dominating the IGP performance charts even with a lower clock speed than both the Intel and NVIDIA parts.
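As a back-of-the-envelope check of the peak-throughput arithmetic above, the sketch below multiplies SP count by shader clock for the parts whose clocks are quoted here (the 780G's shader clock isn't stated in this article, so it is omitted):

```python
# Peak shader throughput = SP count x shader clock (GHz), assuming one
# shader instruction per SP per clock as stated in the article. These are
# paper numbers only; real-world utilization will differ.
parts = {
    "Intel G45":       (10, 0.8),  # 10 SPs @ 800MHz
    "GeForce 9500 GT": (32, 1.4),  # 32 SPs @ 1.4GHz
    "GeForce 8200":    (8, 1.2),   # 8 SPs @ 1.2GHz
}

throughput = {name: sps * ghz for name, (sps, ghz) in parts.items()}
g45 = throughput["Intel G45"]

for name, t in throughput.items():
    print(f"{name}: {t:.1f} Ginstr/s ({t / g45:.1f}x the G45)")
```

By this metric the 9500 GT holds a 5.6x advantage and the GeForce 8200 a 1.2x advantage over the G45, matching the on-paper comparisons above.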
Does this all really matter?
This next point is one that I've quietly argued for the past few years. ATI and NVIDIA have always acted holier-than-thou because of their IGP performance superiority over Intel, but I'd argue they are no better than the boys in blue.
Despite both ATI and NVIDIA being much faster than Intel, the overall gameplay experience delivered by their integrated graphics solutions is still piss poor, even in older games. Try running Oblivion, a 2.5-year-old title, on even AMD's 780G and you'll find that you have to run it at the lowest visual quality settings and the lowest resolutions (800 x 600, max) to get playable frame rates. At those settings, the game looks absolutely horrible.
In those games that aren't visually demanding, performance doesn't actually matter and all three vendors end up doing just fine. Fundamentally both ATI and NVIDIA want to sell more discrete cards, so they aren't going to enable overly high performance integrated solutions. The IGP performance advantages in games amount to little more than a marketing advantage, since anyone who actually cares about gaming is going to be frustrated even by their higher performing integrated solution.
The area where ATI and NVIDIA deliver, and where Intel historically hasn't, is in the sheer ability to actually run games. In the past, driver quality and basic compatibility with most games were simply broken on Intel hardware. Intel tried to address much of that with G45.
There is one aspect of IGP performance that really matters these days, however: video decode acceleration.
53 Comments
Butterbean - Wednesday, September 24, 2008 - link
I'm not sure why this board is measured/reviewed for its gaming ability (or lack thereof). A lot of HTPC peeps get these because they are quiet and can play DVDs without the noise/heat. Not many people really expect to play Oblivion on it.

8steve8 - Wednesday, September 24, 2008 - link
The DG45ID has the unique ability to output simultaneously to two displays over a digital interface. IMO it's the perfect board for non-gamers with dual monitors.
Seriously, analog sucks.
Should be listed in the pros/cons.
CSMR - Wednesday, September 24, 2008 - link
Yes, a very important feature for a work system with integrated graphics. Presumably common to all G45 boards with DVI and HDMI?

yehuda - Saturday, September 27, 2008 - link
No, the Gigabyte board can't do that even though it has both ports. http://download.gigabyte.ru/manual/motherboard_man... (p. 8, footnote 1)
npp - Wednesday, September 24, 2008 - link
SPCR measured the power consumption of the same mini-ITX G45 board and found it to consume 35W at idle with an E7200 CPU installed (which should consume a tiny bit more than a 5200, given that it works at higher FSB speeds and has more cache).

Your figures showed something like 57W. One would say, hey, no big deal, we're talking about only 22W here, but taken as a relative difference it turns out to be 60%! SPCR used only one DIMM, but I doubt that can explain the discrepancy. The PSU was a 400W model, so I guess it has a similar efficiency curve to the Corsair model you used.
Given the strange results of your power consumption measurements recently, I have reason to suspect that something simply isn't right there.
CSMR - Wednesday, September 24, 2008 - link
SPCR people will make more efficient choices: efficient PSU, notebook hard drive, non-overclocked RAM. 57W is a good result for a mainstream review. Little things can add up to 22W, especially PSU efficiency.

MadDogMorgan - Wednesday, September 24, 2008 - link
ANAND! These Vibrant Media popups are KILLING ME!!!! I am about ready to GO INSANE reading your site. You CAN'T POSSIBLY be making any MONEY off those things; they are too INCREDIBLY ANNOYING for anyone to ever THINK about watching one or clicking one.
Oh, and I LIKE PS/2 ports. What's wrong with PS/2? It works great, takes less CPU than USB (in my VERY informal mouse testing), and the headers take up very little space on the mobo. You also have the option to use the USB connections instead, if you want.
Anand Lal Shimpi - Wednesday, September 24, 2008 - link
Visit this URL: http://anandtech.com/siteinfo.aspx?off=yes

It'll disable all IntelliTXT on AnandTech for you :)
-A
zagood - Wednesday, September 24, 2008 - link
Wow, thank you! Now how do we do that on DT?

MadDogMorgan - Wednesday, September 24, 2008 - link
Thank you VERY MUCH for providing this option.

Also, please keep up the good work, and I appreciate you spending some time in the HTPC area. It seems to me there is a decided lack of good technical coverage in this arena, the kind of in-depth coverage that only yours and a couple of other notable sites provide.
I would like to see some TV tuner card reviews from your site comparing the technical details of the latest offerings from Hauppauge, ATI, and any other popular ones. Toss in a review of a few PVR apps like GB-PVR, SageTV, MythTV, and BeyondTV and (HTPC) life would be complete. Don't forget to address the difficulty of getting channel listings when using a freebie like GB-PVR, or the ins and outs of getting scheduled recordings to actually WORK when the app uses the Windows Task Scheduler.
Thanks Again.