Given the very tiny lead of the GTX 480M, I'm very much looking forward to the next enthusiast mobile graphics products from AMD. Given that the Mobility 5870 has a 50W TDP and is essentially a desktop R5770, they may be able to cram an underclocked desktop R5870 into a 100W TDP like the GTX 480M; maybe call it the Mobility 5970? Ah well, it will be exciting to see what the Mobility 6870 brings to the table; I'm assuming we'll see a Southern Islands-derived mobile GPU lineup.
Sorry to bring this up here, but the front page carousel is killing the front page's performance. I've heard lots of people mention this over time, and it's now started happening to me. I think some random update, possibly to Flash or Firefox, has caused this for me. (Using Firefox.)

Is this problem being acknowledged or ignored? I kinda expect more from a site like this, with this much traffic.
If you're not at native size (i.e. no magnification), performance is okay. I'm on a quad-core 3.2GHz Kentsfield system, and the main page is fine normally but if I magnify suddenly it's super slow. Like, peg a core of my CPU at 100% for a couple seconds slow. If you were on a slower system, I imagine it would be terrible.
FWIW, I believe we're talking about killing the carousel. I thought it sounded like a good idea in the design phase, but in practice I don't like it that much.
I just tried on my work computer (3GHz Q6600) and I get processor usage spiking to about 28% spread across 2-3 cores when the carousel shifts. Using the keyboard buttons to magnify doesn't change the processor usage any.
I never look at it though; without any defined beginning and end, I find myself having to watch the whole thing to see what might be new. It is far easier to just look at the static listing.
Forgot that I use magnification for this site. It's definitely the main cause of the huge performance hit, ouch! (dual-core, pretty fast machine really).
I think it would be a lot easier if the space now used for the carousel became something static along the lines of Engadget's chunk for "top stories". It's nice to have something there to point out important reviews/news -- I wouldn't want to see the idea completely gone, it's just a carousel is so December 2009 :-)
While it seems generally true that power keeps increasing from generation to generation (3870, 4870, 5870), wasn't the big drop from the HD2900 series conveniently left out to make that statement stick?
It's not really that power always increases; there's a ceiling that was reached a few generations ago, and the only thing you can say is that the latest generations are generally closer to that ceiling than most of the ones before them. What the desktop GTX 480 pulls is about the most we will ever see in a desktop, barring some serious cooling/housing/power redesigns.
The 2900 was the P4 of the graphics card world regarding power/performance. It was only released because ATI had to have something, anything, in the marketplace. If ATI had as much cash in the bank as Intel does, they would have cancelled the 2900 like Intel did Larrabee.
Thankfully the 2900 went on from its prematurity to underpin radeons 3, 4 and 5. Whereas Prescott was just brute force attempting to beat thermodynamics. Ask Tejas what won :)
That's why I said "generally trending up". When the HD 2900 came out, I'm pretty sure most people had never even considered such a thing as a 1200W PSU. My system from that era has a very large for its time 700W PSU for example. The point of the paragraph is that while desktops have a lot of room for power expansion, there's a pretty set wall on notebooks right now. Not that I really want a 350W power brick.... :)
Thank you for the article as many of us (from an interest standpoint and not necessarily from a buyer's standpoint) were waiting for the 480M in the wild.
My major complaint with the article is that this is essentially a GPU review. Sure, the GPU is in a laptop, since this is a notebook review, but the only thing discussed here was the difference between GPUs.

With that being the case, why are there no POWER CONSUMPTION numbers when gaming? It's been stated for almost every AVA laptop that these are glorified portable desktop computers, with batteries that are essentially used only for moving from one outlet to the next.

I think the biggest potential pitfall for the new 480M, with performance only marginally better than the 5870 (disgusts me to even write that name due to the neutered design), is how much more power it is drawing from the wall during these gaming scenarios.

Going along with power usage would be fan noise, of which I see nothing mentioned in the review. Having that much more juice needed under load should surely increase the fan noise compared to the 5870... right?
These are two very quick measurements that could be done to beef up the substance of an otherwise good review.
We're working to get Dustin a power meter. Noise testing requires a bit more hardware so probably not going to have that for the time being unfortunately. I brought this up with Anand, though, and when he gets his meter Dustin can respond (and/or update the article text).
"Presently the 480M isn't supported in CS5; in fact the only NVIDIA hardware supported by the Mercury Playback Engine are the GeForce GTX 285 and several of NVIDIA's expensive workstation-class cards."
I did the following with my 1GB 9800GT and it's an incredible boost: multiple HD streams with effects, without pausing.

http://forums.adobe.com/thread/632143
I figured out how to activate CUDA acceleration without a GTX 285 or Quadro... I'm pretty sure it should work with other 200-series GPUs. Note that I'm using 2 monitors and there's an extra tweak to play with CUDA seamlessly with 2 monitors.

Here are the steps:

Step 1. Go to the Premiere CS5 installation folder.

Step 2. Find the file "GPUSniffer.exe" and run it in a command prompt (cmd.exe). You should see something like this:

----------------------------------------------------
Device: 00000000001D4208 has video RAM(MB): 896
Device: 00000000001D4208 has video RAM(MB): 896
Vendor string: NVIDIA Corporation
Renderer string: GeForce GTX 295/PCI/SSE2
Version string: 3.0.0
OpenGL version as determined by Extensionator...
OpenGL Version 2.0
Supports shaders!
Supports BGRA -> BGRA Shader
Supports VUYA Shader -> BGRA
Supports UYVY/YUYV ->BGRA Shader
Supports YUV 4:2:0 -> BGRA Shader
Testing for CUDA support...
Found 2 devices supporting CUDA.
CUDA Device # 0 properties -
CUDA device details:
Name: GeForce GTX 295 Compute capability: 1.3
Total Video Memory: 877MB
CUDA Device # 1 properties -
CUDA device details:
Name: GeForce GTX 295 Compute capability: 1.3
Total Video Memory: 877MB
CUDA Device # 0 not choosen because it did not match the named list of cards
Completed shader test!
Internal return value: 7
------------------------------------------------------------

If you look at the line near the end, it says the CUDA device was not chosen because it's not in the named list of cards. That's fine. Let's add it.
Step 3. Find the file "cuda_supported_cards.txt", edit it, and add your card. Take the name from the "CUDA device details" line (Name: GeForce GTX 295 Compute capability: 1.3), so in my case the name to add is: GeForce GTX 295

Step 4. Save that file and we're almost ready.

Step 5. Go to your NVIDIA driver control panel (I'm using the latest 197.45). Under "Manage 3D Settings", click "Add", browse to your Premiere CS5 install directory, and select the executable "Adobe Premiere Pro.exe".

Step 6. In the field "multi-display/mixed-GPU acceleration", switch from "multiple display performance mode" to "compatibility performance mode".

Step 7. That's it. Boot Premiere, go to your project settings / general, and activate CUDA.
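For anyone applying the Step 3 edit on several machines, the text manipulation is easy to script. This is only a rough sketch, not part of Adobe's tooling: the file names come from the steps above, but the parsing pattern and helper names are my own, and it assumes GPUSniffer's output looks like the dump above.

```python
import re

# Abridged sample of GPUSniffer.exe output, taken from the dump above.
SNIFFER_OUTPUT = """\
Testing for CUDA support...
Found 2 devices supporting CUDA.
CUDA device details:
Name: GeForce GTX 295 Compute capability: 1.3
Total Video Memory: 877MB
"""

def parse_cuda_card_names(sniffer_output):
    # Card names appear on lines like "Name: GeForce GTX 295 Compute capability: 1.3".
    return set(re.findall(r"Name:\s*(.+?)\s+Compute capability", sniffer_output))

def add_missing_cards(supported_txt, names):
    # Append any detected card not already listed in cuda_supported_cards.txt.
    existing = {line.strip() for line in supported_txt.splitlines() if line.strip()}
    kept = [line for line in supported_txt.splitlines() if line.strip()]
    return "\n".join(kept + sorted(names - existing)) + "\n"

names = parse_cuda_card_names(SNIFFER_OUTPUT)
print(names)                                          # {'GeForce GTX 295'}
print(add_missing_cards("GeForce GTX 285\n", names))  # both cards, one per line
```

In practice you would read GPUSniffer's real output and write the result back to cuda_supported_cards.txt; the functions above just show the transformation.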
That's really cool, thanks for the post. My cousin does Adobe work and I believe has the GTS 250 with either 512MB or 1GB of memory. I'll have to try this out the next time I'm over at his place.
The best part is that Adobe is aware of this tweak and has no plans to "turn it off". While using this method is not officially supported, it appears to be unofficially encouraged.
I like the idea of a $1000-1500 gaming notebook for moderate gaming, but I don't think this notebook is anywhere near worth the price. For $3000, one could buy a mid-level notebook for moderate gaming and buy/build a $1500 desktop that would have excellent performance.
I'm still not satisfied with their naming scheme. I do think this is a step in the right direction though. This time at least the name refers to the correct architecture. But the GTX 480M isn't a mobile version of a GTX 480. It's more like a GTX 465M. And this isn't just a Nvidia problem. The Mobility 5870 isn't a mobile version of a 5870.
I think the idea of naming laptop cards after desktop cards is flawed to begin with. Instead, laptop cards should have their own series name. Then the name would never be misleading. Then the ATI Mobility <Series Name> could be based off the desktop Juniper chip and nobody would care. The name wouldn't refer to something that the card isn't. Hopefully that made sense.
I also wanted to say that I've really been digging the articles AT has been putting out lately. Very thorough and informative analysis. Keep it up!
I completely agree. The 480M isn't "properly named". It should be named 465M.

Also, I couldn't care less who (nVidia or ATI) has the 'fastest' card as long as it's practical... MSI has a new laptop (reviewed here on AT) that still gets 2~3 hrs of battery life with a Mobility 5870. In my mind, the superior product is the one that can actually be used while not plugged in all of the time. And I don't need to re-hash all of the impractical reasons to get the desktop Fermi... I still can't get the "epic fail" taste out of my mouth from this series of graphics cards from nVidia.
The thing is, at least the 480M is the same freaking silicon as the desktop 480. It may be crippled, but it's the same chip. The same can't be said about...well...pretty much anything else in Nvidia's mobile lineup. ATI was doing well in the 4 series, but their 5 series is nearly as bad. 5700s = desktop 5600s, 5800s = desktop 5700s.
That doesn't make sense. The desktop 470 and 465 are also "crippled" versions of the 480, but at least they are appropriately named. That's the point.
"NVIDIA's GTX 480M uses the same cut-down core found in desktop GeForce GTX 465 cards."
480M: 352 CUDA cores, 256-bit, GDDR5
GTX 465 desktop: 352 CUDA cores, 256-bit, GDDR5
GTX 480 desktop: 480 CUDA cores, 384-bit, GDDR5

So logically, if the 480M is the SAME as the desktop 465... then it should be called the 465M, not the 480M. Technically speaking, NVIDIA does NOT make a mobile GTX 480. It's misleading and just plain nonsense.

ATI is no better.
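The point is easy to verify from the numbers quoted in the thread. A throwaway sanity check (the `specs` table is my own encoding of the figures above; clock speeds, which do differ between the mobile and desktop parts, are not captured here):

```python
# (CUDA cores, memory bus width in bits, memory type), as quoted above.
specs = {
    "GTX 480M":        (352, 256, "GDDR5"),
    "GTX 465 desktop": (352, 256, "GDDR5"),
    "GTX 480 desktop": (480, 384, "GDDR5"),
}

# The 480M matches the desktop 465 configuration, not the desktop 480's.
assert specs["GTX 480M"] == specs["GTX 465 desktop"]
assert specs["GTX 480M"] != specs["GTX 480 desktop"]
print("480M config matches desktop GTX 465, not desktop GTX 480")
```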
I really don't care one bit anymore about DX9. Please stop putting this in your testing. I doubt anyone else cares about DX9 numbers anymore... I mean why not put in DX8 numbers too???
And why are DX9 numbers only shown for nVidia products? Are they asking you to do so?
it downright tears past the competition in Far Cry 2 and DiRT 2 - yeah, and people want to NOT play those in DX11... <sigh>

And you say that in DiRT 2 it holds a 25% lead - yeah, when comparing the DX9 numbers of the 480M to the DX11 numbers of the Mobility 5870. The real difference is actually 0.1 fps (look at the W880CU-DX11 line)... yep, I'm not reading the rest of this article. Not a very well written article...
You're not reading the chart properly. We put DX11 in the appropriate lines and colored them differently for a reason. Bright green compares with blue, and dark green compares with purple. The yellow/orange/gold are simply there to show performance at native resolution (with and without DX11).
In DX9, the 480M gets 79.6 vs. 59.9 for the 5870; the 480M wins by 33%. In DX11, the 480M gets 60.0 vs. 48.1; the 480M wins by 25%.
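Those two percentages check out against the FPS figures. A quick look at the arithmetic (the helper name is mine; the numbers are the ones quoted above):

```python
def percent_lead(winner_fps, loser_fps):
    # Relative lead of one card over another, rounded to the nearest whole percent.
    return round((winner_fps / loser_fps - 1) * 100)

print(percent_lead(79.6, 59.9))  # 33  (DX9: 480M vs. Mobility 5870)
print(percent_lead(60.0, 48.1))  # 25  (DX11: 480M vs. Mobility 5870)
```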
As for including DX9, it's more a case of using something other than DX11 for cards that don't support DX11, as well as a check to see if DX11 helps or hurts performance. DiRT 2 doesn't have a DX10 path, so we drop to DX9 if we don't have DX11 hardware. Another example, in Metro 2033 enabling DX11 results in a MASSIVE performance hit. So much so that on notebooks it's essentially useless unless you run at 1366x768 with a 480M.
While it's swell that you don't care about DX9 anymore, the fact is that a substantial number of games released today STILL use it. DX10 never really took off, and while DX11 is showing strong signs of adoption moving forward, a large number of games still run in DX9 mode.
Is the author an NVIDIA fanboi? Apparently the 5870M is anemic while the 480M is the "fastest mobile GPU on the planet". Of course the more moderate comments are hidden in the details while "fastest on the planet" is screamed in bold letters.
Never mind that unless you have an FPS counter on your display you couldn't tell the difference, apparently a few extra FPSs and a name that starts with "N" is all you need to get a glowing review complete with stupendous superlatives.
Also, apparently it is OK to dismiss certain games because they are known to favour ATI hardware. But let's not mention anything about, cough, Far Cry, cough.
I'd love to see what NVIDIA thinks of your comment, because I know they felt Dustin was overly harsh. He's also been accused of being an AMD Fanboi, so apparently he's doing his job properly. ;-)
The gaming performance is a case of looking at what's required to play a game well, as well as focusing on the big picture. Sure, L4D2 is a Source engine game and favors AMD architectures traditionally. It also happens to run at 62 FPS 1080p with 4xAA (which is faster than the 58 FPS the 5870 manages at the same settings). Mass Effect 2 performance has changed quite a bit between driver versions on 5870, and it isn't as intensive as other games. Just because 5870 leads at 1600x900 in two out of nine titles doesn't mean it's faster. At 1080p the margins generally favor 480M, and with 4xAA enabled they favor it even more.
Even with that, we go on to state that the 480M doesn't deliver a resounding victory. It's the world's fastest mobile GPU, yes, but it ends up being 10-15% faster on average, which is hardly revolutionary. I said the same thing in the G73Jh review of the 5870, and that got an Editor's Choice while this doesn't. Seriously, read the conclusion (pages 6 and 7) and tell me that we come off as being pro-NVIDIA and anti-AMD in this review.
I did re-read those parts also. I didn't even notice it myself on the first reading, but I can see how one would see some "ATI bashing" (although I would not use that strong a word), in that the article is about the 480M but you spend a considerable amount of time criticizing (justifiably) the HD 5870M. It just seems that you emphasized the shortcomings of the ATI part in an article primarily about the 480M, while being rather easy on the 480M itself in most sections. That said, I don't think you are unfair in general or intentionally; I just think the article was somewhat skewed in that particular section. And actually, as you are, I am quite disappointed in both GPUs, but more so in the 480M, in that it is more expensive and power hungry for a rather small performance increase.
I disagree. The "bashing" that's done for the 5870M I think sets the tone for how "lame" the 480M ultimately is.
I found that the bashing of the 5870 really brought to me in perspective just how relatively uninteresting the 480M really is. I mean, if the 5870 was only marginally faster than an "archaic" G92 part, what does that say about NVidia's self-proclaimed ushering in a "revolution" in graphical performance?
I see it as a giant "thud", much like the GTX465 is alluded to in page 5..
As I mentioned, I did see the more moderate comments, what I was trying to get across was that the attention grabbing headline was out of balance with the actual review.
And if you discount one game for being favoured by ATI, then you should probably mention Far Cry being favoured by NVIDIA. These types of issues are being highlighted again with recent revelations that NVIDIA is hobbling CPU benchmarks of PhysX performance with unoptimized code.
One additional comment, it is always difficult to compare graphs with long titles for the different configurations, especially when the colors and the order keep changing.
Every time someone charges me with an Nvidia bias, an angel gets its wings.
When I write I have to try to remove my own personal biases from the material, so when my printed bias swings in the exact opposite direction from my personal one (all of my machines with dedicated GPUs are running Radeons), I feel like I've achieved something.
Yes, I own one. It plays Fallout 3 at four to five FPS at 1280x800 and has developed 28 vertical lines on the screen. But my XPS Gen 2 is still my front-line PC for a few reasons: 1) it's paid off, 2) it runs XP satisfactorily for general computing, 3) although it was "flashy" back in its day, it is not nearly as terrible looking as most "gaming" laptops these days, and 4) it HAS ports on the back! With that said, this base chassis has to be one of the best looking laptops on the market. It is just difficult to justify if you are also considering a desktop PC.
How about a give-away with one of these as the prize!
Ha ha, and thanks for the article Anandtech and Dustin.
Rumors suggest GF104 will actually have the same core configuration as the current 465, without the wasted transistors. I am wondering whether those wasted transistors leak power as well.

If so, then with the better yields and leakage improvement from GF104, we could expect an even more powerful GTX 480M, or a lower-power version of the GTX 480M with a smaller die, less heat, less power, and the same performance.

Until then, I am waiting for a tweaked version of Fermi with better power management and a 28nm LP die shrink for laptops/notebooks.
Bullshit, man. They are selling dinosaurs in the age of aliens... kind of funny that a few people will still buy them for the ads and all... a laptop that performs worse than a desktop and can't play games when it's unplugged :)
Works on the GTX 470 and GTX 480, so it should also work with the mobile versions. Makes A WORLD of difference and is a huge boost for users of Premiere, especially when dealing with RED material or Canon's DSLR line.
my_body_is_ready - Thursday, July 8, 2010 - link
Any news on what ASUS will be doing with this chip? I hear they are refreshing their G series and adding 3D Vision.

JarredWalton - Thursday, July 8, 2010 - link
If ASUS doesn't, someone else will. I suspect we'll see that sort of notebook come fall.

drfelip - Thursday, July 8, 2010 - link
IIRC the Asus G73JW is going to sport a GTX 480M, but probably a downclocked one...

blyndy - Thursday, July 8, 2010 - link
Isn't ATI supposed to release some new mobile parts about now?
tommy2q - Thursday, July 8, 2010 - link
The carousel is a CPU hog and makes the front page harder/slower to browse for information because it takes up way too much space...

B3an - Thursday, July 8, 2010 - link
It would be better to keep it, but make it Flash. For any sort of animation, Flash runs much better with less CPU usage - if done right. I make stuff like this all the time; you're looking at around 2-4% CPU usage with Flash on an average quad-core. Even an Atom CPU would easily cope.

But Anand seems to be a big crApple supporter, so I can't see that happening.
7Enigma - Friday, July 9, 2010 - link
Really no one else agrees? Guess it's just me then.....

7Enigma - Monday, July 12, 2010 - link
Thanks Jarred! For all the other laptop types I don't think it matters, but for these glorified UPS systems it would be an important factor when purchasing.

Thanks again for taking the time to respond.
therealnickdanger - Thursday, July 8, 2010 - link
Sorry, I should have said for ANY CUDA card with 786MB RAM or more. It's quite remarkable.
B3an - Thursday, July 8, 2010 - link
Surely NV will be supporting CS5 with at least the 4xx series? Why only have the GTX 285 support it among non-workstation cards?

therealnickdanger - Friday, July 9, 2010 - link
NVIDIA doesn't have much say in the matter. It's Adobe's software, Adobe's engine. The 4xx series works exceptionally well with the tweak, so it's a non-issue anyway.
Gunbuster - Thursday, July 8, 2010 - link
Can we get a benchmark with a CrossfireX HD 5870 Laptop?
I'm still not satisfied with their naming scheme. I do think this is a step in the right direction though. This time at least the name refers to the correct architecture. But the GTX 480M isn't a mobile version of a GTX 480. It's more like a GTX 465M. And this isn't just a Nvidia problem. The Mobility 5870 isn't a mobile version of a 5870.I think the idea of naming laptop cards after desktop cards is flawed to begin with. Instead, laptop cards should have their own series name. Then the name would never be misleading. Then the ATI Mobility <Series Name> could be based off the desktop Juniper chip and nobody would care. The name wouldn't refer to something that the card isn't. Hopefully that made sense.
I also wanted to say that I've really been digging the articles AT has been putting out lately. Very thorough and informative analysis. Keep it up!
anactoraaron - Thursday, July 8, 2010 - link
I completely agree. The 480M isn't "properly named". It should be named 465M.Also, I could care less who (nVidia or ATI) has the 'fastest' card as long as it's practical... MSI has a new laptop (reviewed here on AT) that still gets 2~3 hrs of battery life with a mobility 5870. In my mind, the superior product is the one that can actually be used not plugged in all of the time. And I don't need to re-hash all of the impractical reasons to get the desktop fermi... I still can't get the "epic fail" taste out of my mouth from this series of graphics cards from nVidia.
Dustin Sklavos - Friday, July 9, 2010 - link
The thing is, at least the 480M is the same freaking silicon as the desktop 480. It may be crippled, but it's the same chip. The same can't be said about...well...pretty much anything else in Nvidia's mobile lineup. ATI was doing well in the 4 series, but their 5 series is nearly as bad. 5700s = desktop 5600s, 5800s = desktop 5700s.therealnickdanger - Friday, July 9, 2010 - link
That doesn't make sense. The desktop 470 and 465 are also "crippled" versions of the 480, but at least they are appropriately named. That's the point."NVIDIA's GTX 480M uses the same cut-down core found in desktop GeForce GTX 465 cards."
480M:
352 CUDA Cores
256-bit
GDDR5
GTX465 Desktop:
352 CUDA Cores
256-bit
GDDR5
GTX480 Desktop:
480 CUDA Cores
384-bit
GDDR5
So logically, if the 480M is the SAME as the desktop 465... then it should be called the 465M, not the 480M. Technically speaking, NVIDIA does NOT make a mobile GTX 480. It's misleading and just plain nonsense.
ATI is no better.
anactoraaron - Thursday, July 8, 2010 - link
I really don't care one bit anymore about DX9. Please stop putting this in your testing. I doubt anyone else cares about DX9 numbers anymore... I mean why not put in DX8 numbers too???And why are DX9 numbers only shown for nVidia products? Are they asking you to do so?
anactoraaron - Thursday, July 8, 2010 - link
it downright tears past the competition in Far Cry 2 and DiRT 2 - yeah, and people want to NOT play those in DX11... <sigh>
anactoraaron - Thursday, July 8, 2010 - link
And you say that in DiRT 2 it holds a 25% lead - yeah, when comparing the DX9 numbers of the 480M to the DX11 numbers of the Mobility 5870. The real difference is actually .1 fps (look at the W880CU-DX11 line)... yep, I'm not reading the rest of this article. Not a very well written article... btw, sorry about the double post earlier...
JarredWalton - Thursday, July 8, 2010 - link
You're not reading the chart properly. We put DX11 in the appropriate lines and colored them differently for a reason. Bright green compares with blue, and dark green compares with purple. The yellow/orange/gold are simply there to show performance at native resolution (with and without DX11).
In DX9, the 480M gets 79.6 vs. the 5870 with 59.9. The 480M wins by 33%.
In DX11, the 480M gets 60.0 vs. the 5870 with 48.1. The 480M wins by 25%.
As for including DX9, it's more a case of using something other than DX11 for cards that don't support DX11, as well as a check to see if DX11 helps or hurts performance. DiRT 2 doesn't have a DX10 path, so we drop to DX9 if we don't have DX11 hardware. Another example, in Metro 2033 enabling DX11 results in a MASSIVE performance hit. So much so that on notebooks it's essentially useless unless you run at 1366x768 with a 480M.
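For what it's worth, the quoted leads check out arithmetically; here is a minimal sketch using the FPS numbers from the reply above (the `lead_pct` helper is purely illustrative, not anything from the article):

```python
# Check of the percentage leads quoted above, using the chart's FPS numbers.
def lead_pct(winner_fps: float, loser_fps: float) -> float:
    """Return the winner's lead over the loser as a percentage."""
    return (winner_fps / loser_fps - 1.0) * 100.0

dx9_lead = lead_pct(79.6, 59.9)   # 480M vs. 5870, DiRT 2 in DX9
dx11_lead = lead_pct(60.0, 48.1)  # 480M vs. 5870, DiRT 2 in DX11

print(f"DX9 lead: {dx9_lead:.0f}%")    # ~33%
print(f"DX11 lead: {dx11_lead:.0f}%")  # ~25%
```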
Dustin Sklavos - Thursday, July 8, 2010 - link
While it's swell that you don't care about DX9 anymore, the fact is that a substantial number of games released today STILL use it. DX10 never really took off, and while DX11 is showing strong signs of adoption moving forward, a large number of games still run in DX9 mode.
GTVic - Thursday, July 8, 2010 - link
Is the author an NVIDIA fanboi? Apparently the 5870M is anemic while the 480M is the "fastest mobile GPU on the planet". Of course the more moderate comments are hidden in the details while "fastest on the planet" is screamed in bold letters. Never mind that unless you have an FPS counter on your display you couldn't tell the difference; apparently a few extra FPS and a name that starts with "N" is all you need to get a glowing review complete with stupendous superlatives.
Also, apparently it is OK to dismiss certain games because they are known to favour ATI hardware. But let's not mention anything about, cough, Far Cry, cough.
JarredWalton - Thursday, July 8, 2010 - link
I'd love to see what NVIDIA thinks of your comment, because I know they felt Dustin was overly harsh. He's also been accused of being an AMD fanboi, so apparently he's doing his job properly. ;-)
The gaming performance is a case of looking at what's required to play a game well, as well as focusing on the big picture. Sure, L4D2 is a Source engine game and favors AMD architectures traditionally. It also happens to run at 62 FPS at 1080p with 4xAA (which is faster than the 58 FPS the 5870 manages at the same settings). Mass Effect 2 performance has changed quite a bit between driver versions on the 5870, and it isn't as intensive as other games. Just because the 5870 leads at 1600x900 in two out of nine titles doesn't mean it's faster. At 1080p the margins generally favor the 480M, and with 4xAA enabled they favor it even more.
Even with that, we go on to state that the 480M doesn't deliver a resounding victory. It's the world's fastest mobile GPU, yes, but it ends up being 10-15% on average which is hardly revolutionary. I said the same thing in the G73Jh review on the 5870, and it got an editor's choice while this doesn't. Seriously, read the conclusion (pages 6 and 7) and tell me that we come off as being pro-NVIDIA and anti-AMD in this review.
frozentundra123456 - Thursday, July 8, 2010 - link
I did re-read those parts also. I didn't even notice it myself on the first reading, but I can see how one would see some "ATI bashing" (although I would not use that strong a word), in that the article is about the 480M, but you spend a considerable amount of time criticizing (justifiably) the HD 5870M. It just seems that you emphasized the shortcomings of the ATI part in an article primarily about the 480M, while being rather easy on the 480M itself in most sections.
That said, I don't think you are unfair in general or intentionally; I just think the article was somewhat skewed in that particular section.
And actually, as you are, I am quite disappointed in both GPUs, but more so in the 480M, in that it is more expensive and power hungry for a rather small performance increase.
erple2 - Thursday, July 8, 2010 - link
I disagree. The "bashing" that's done for the 5870M, I think, sets the tone for how "lame" the 480M ultimately is.
The bashing of the 5870 really put into perspective for me just how uninteresting the 480M really is. I mean, if the 5870 was only marginally faster than an "archaic" G92 part, what does that say about NVIDIA's self-proclaimed ushering in of a "revolution" in graphical performance?
I see it as a giant "thud", much like the GTX 465 is alluded to on page 5.
GTVic - Thursday, July 8, 2010 - link
As I mentioned, I did see the more moderate comments; what I was trying to get across was that the attention-grabbing headline was out of balance with the actual review.
And if you discount one game for being favoured by ATI, then you should probably mention Far Cry being favoured by NVIDIA. Those types of issues are being highlighted again with recent revelations that NVIDIA is hobbling CPU benchmarks of PhysX performance with unoptimized code.
One additional comment, it is always difficult to compare graphs with long titles for the different configurations, especially when the colors and the order keep changing.
Dustin Sklavos - Friday, July 9, 2010 - link
Every time someone charges me with an Nvidia bias, an angel gets its wings.
When I write I have to try and remove my own personal biases from the material, so when my printed bias swings in the exact opposite direction from my personal one (all of my machines with dedicated GPUs are running Radeons), I feel like I've achieved something.
GamerDave20 - Thursday, July 8, 2010 - link
Yes, I own one. It plays Fallout 3 at four to five FPS at 1280x800 and has developed 28 vertical lines on the screen. But my XPS Gen 2 is still my front-line PC for a few reasons:
1) it's paid off,
2) it runs XP satisfactorily for general computing,
3) although it was "flashy" back in its day, it is not nearly as terrible looking as most "gaming" laptops these days,
4) and, it HAS ports on the back!
With that said, this base chassis has to be one of the best looking laptops on the market.
It is just difficult to justify if you are also considering a desktop PC.
How about a give-away with one of these as the prize!
Ha ha, and thanks for the article Anandtech and Dustin.
Dave (GamerDave20)
iwod - Sunday, July 11, 2010 - link
The rumors suggest GF104 actually has the same core as the current 465 without the wasted transistors. I am wondering if those wasted transistors leak power as well?
If so, then with the better yields and leakage improvements from GF104, we could expect an even more powerful GTX 480M, or a lower-power version of the GTX 480M with a smaller die, less heat, less power, and the same performance.
Until then, I am waiting for a tweaked version of Fermi with better power management and a 28nm LP die shrink for laptops/notebooks.
VIDYA - Monday, July 12, 2010 - link
Bullshit, man, they are selling dinosaurs in the age of aliens... kind of funny that a few people will still buy them for the ads and all... for a laptop that performs worse than a desktop and can't play when it's unplugged :)
VIDYA - Monday, July 12, 2010 - link
GF104 is the newborn baby, BTW... this one is a lean, mean overclocker too!
maarek999 - Thursday, July 15, 2010 - link
You can definitely get acceleration with different Nvidia cards in Premiere CS5. There is a very simple hack for it: http://www.dvxuser.com/V6/showthread.php?t=209116
It works on the GTX 470 and GTX 480, so it should also work with the mobile versions. It makes A WORLD of difference and is a huge boost for Premiere users, especially when dealing with RED material or Canon's DSLR line.
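For reference, the CS5-era hack in that thread essentially amounts to adding the card's name to Premiere's CUDA whitelist file. A minimal sketch, assuming the community-documented file name `cuda_supported_cards.txt` and that it sits in the Premiere Pro CS5 install folder (verify both against your own install and the linked thread before editing anything):

```python
# Sketch of the whitelist hack described in the linked thread.
# The file name and its location are assumptions from community guides.
from pathlib import Path

# e.g. Path("C:/Program Files/Adobe/Adobe Premiere Pro CS5")
premiere_dir = Path(".")
whitelist = premiere_dir / "cuda_supported_cards.txt"

# Append the mobile card's name so the Mercury Playback Engine accepts it.
with whitelist.open("a") as f:
    f.write("GeForce GTX 480M\n")

# Show the resulting whitelist so the new entry can be confirmed.
print(whitelist.read_text())
```

The GPU name string must match what the driver reports for the card, so copy it exactly from your device manager rather than typing it from memory.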