It still seems to me that this misses where it would benefit most: 13 inch laptops, which currently mostly use dual core processors. GT3e would make something like the Retina MBP 13" much more appealing for instance, but it's paired with processors such that the wattage would be too high.
Yes, it's a dedicated cache for both the CPU and the GPU. However it's very unlikely you're going to run into any scenario that uses a Crystalwell-equipped part in such a manner. It's not being sold in socket form, so it will go to OEMs, who in turn would only use it if they didn't include a dGPU.
So pretty much, unless you've got some huge beefy GPU that would absolutely suck up power compared to just using Iris Pro graphics, no one would opt for that SKU?
Right on. A dual core model for the 13" rMBP would have me selling my 2012 immediately. Now I need to decide if I can live with the 15" or even bother.
If I interpreted the results of this article correctly, I suspect that the 15" MBP is probably going to get a new and even thinner form factor with this refresh (one chip less, fewer VRM-related parts, lower combined TDP).
A 15" rMBP approaching the weight of a 15" Macbook Air would be very interesting, although a part of me hoped that Apple would wait until Broadwell to ditch the dGPU in the 15".
Such a step back in GPU performance with the Retina display is surely not going to be very pleasant in 3D applications.
I actually hope/suspect that Apple will go the other road: use a discrete graphics solution on the 15" rMBP until Broadwell comes out, but have a cTDP-down version of the 4850HQ on the 13" rMBP. Maybe they can even get the normal TDP version in there; after all, it has the same (good) cooling the 15" rMBP has, and I have never heard the fans on mine. I think Apple really designed it with Haswell in mind, so let's see what they'll bring on during the next few weeks.
Even for a given power usage the 650M isn't at the top of the list for highest-end discrete GPU. The best performance-per-watt at the moment would be the 765M, and even the Radeon HD 7750 draws less power while delivering a tad more performance than the 650M. Clearly someone did not do their research before opening their mouth.
I'm gonna go out on a limb and say that vFunct is one of those Apple fanboys that knows nothing about performance. You can get a PC laptop in the same size and have better performance than any Macbook available for $500 less. Hell you can even get a Tablet with an i7 and 640M that'll spec out close to the 650M for less than a Macbook Pro with 650M.
The Iris Pro 5200 would be ideal for both machines. Pro users would benefit from ECC memory for the GPU. The Iris chip uses ECC memory making it ideal for OpenCL workloads in Adobe CS6 or Final Cut X. Discrete mobile chips may produce errors in the OpenCL output. Gamers would probably prefer a discrete chip, but that isn't the target for these machines.
I think Apple cares more about the OpenCL performance which is excellent on the Iris. I doubt the 15" will have a discrete GPU. There isn't one fast enough to warrant it over the Iris 5200. If they do ever put a discrete chip back in, I hope they go with ECC GDDR memory. My guess is space savings will be used for more battery. It is also possible they may try to reduce the display bezel.
It's never had the highest end chip, just the best "upper midrange" one. Above the 8600m GT was the 8800m GTX and GTS, and above the 650m there was the 660, a couple 670 versions, the 675 versions, and the 680.
They chose the highest performance part that hit a specific TDP, stretching a bit from time to time. It was generally the case that anything which outperformed the MBP was either a thick brick, or had perpetual overheating issues.
It wouldn't surprise me if the 15in just had the "beefed up" Iris Pro, honestly. They might even get their own special version clocked even higher than the 55W part.
Mainly, because it wouldn't be without precedent. Remember when the 2009 15in macbook pro had a 9400m still? Or when they dropped the 320m for the hd3000 even though it was slightly slower?
They sometimes make lateral, or even slightly backwards moves when there are other motives at play.
That's just crazy talk, they won't drop dedicated graphics. The difference is still too big, plus you can't sell a $2000 laptop without dedicated GFX.
It appears to do compute better than graphics (and ECC memory is a plus for compute). That is exactly what pros will be looking for. Apple doesn't cater to the gaming market with these machines even if they should play most games fine. A dedicated gaming machine would be built much differently than this.
This. I don't know about anyone else, but I'm not dropping 2 grand, or $2700 with upgrades, on a 15 incher that does not have dedicated graphics.
Another problem I see is that the 13" Retina only uses duals, and if they did use this quad with GT3e silicon, then the price of the 13" will go up at least $150, since the i7's and i5's the 13" currently uses are sub-$300 parts.
The only solution I see is Apple offering it as a build-to-order/max upgrade option, and even then they risk segmentation across the product line.
"can't sell a $2000 laptop without a dedicated GFX". Absolutely true, especially when the GT3e is still a little slower than the 650M. So the 750M tweaked a few mhz higher will do nicely for the rMBP. The 13 incher will get a boost with the GT3e CPU. So a slight upgrade to lower power cpu maybe worthwhile to some. Improvement to 1080p eyesight camera would be a given for the new rMBP.
You can drop discrete graphics when that $2000+ laptop is using built-in graphics with the same price premium and number of transistors as the discrete chip. I'm almost positive the discrete GPU will go away. I have a feeling that Apple had a say in optimizations and stressed OpenCL performance. That is probably what they will highlight when they announce a new MacBook Pro.
I really hope that Apple continues to treat the rMBP 15 as a flagship. Giving it an iGPU only would be a deal breaker for many professionals, at least in Haswell's current form. Until Intel can make an iGPU that at least matches or exceeds performance at high resolutions, it is still a no-go for me.
Why is that a deal breaker? The Iris 5200 is better than a discrete chip for compute (OpenCL). If you are doing 3D rendering, video editing, Photoshop, bioinformatics, etc., that is what you should care about. It also has ECC memory, unlike a discrete chip, so you know your output is correct. How fast it can texture triangles is less important. It still has plenty of power in that area for any pro app. This is not designed to be a gaming machine. Not sure why anyone would be surprised it may not be optimized for that.
You never know, but I doubt it. They will have trouble with the ports on the side if they make it smaller. I think it is more likely the space saving will go to additional battery. They may be able to get similar battery life increases to the Air with the extra space.
Look at the overheating issues that come with i5/i7 Razer notebooks, and the same heating was noticed at their Haswell notebook press event several days ago.
If Apple decides to use these Haswells, which put out heat in a concentrated area, in very thin enclosures, you are essentially computing over a mini bake oven.
Yea, laptops benefit most - good for them. But what about the workstation? So Intel stopped being a CPU company and turned into a mediocre GPU company? (It can't even beat last year's GT 650M.) I would applaud the rise in GPU performance if they had not completely forgotten the CPU. M.
While I agree this misses "where it would benefit most", I disagree on just *where* that is.
I guess Intel agrees with Microsoft's implicit decision that Media Center is dead. Real-time HQ QuickSync would be perfect to transcode anything extenders couldn't handle, and would also make the scanning for and skipping of commercials incredibly efficient.
The last 13" looks like they were prepping it for a fusion drive then changed their mind leaving extra space in the enclosure. I think it is due for an internal redesign that could allow for a higher wattage processor.
I think the big deal is the OpenCL performance paired with ECC memory for the GPU. The Nvidia discrete processor uses non-ECC GDDR. This will be a big deal for users of Adobe products. Among other things, this solves the issue of using the Adobe mercury engine with non-ECC memory and the resulting single byte errors in the output. The errors are not a big deal for games, but may not be ideal for rendering professional output and scientific applications. This is basically a mobile AMD FireGL or Nvidia Quadro card. Now we just need OpenCL support for the currently CUDA-based mercury engines in After Effects and Premiere. I have a feeling that is coming or Adobe will also lose Mercury Engine compatibility with the new Mac Pro.
Impressive iGPU performance, but I knew Intel was absolutely full of sh!t when claiming equal to or better than GT 650m performance. Not really even close, typically behind by 30-50% across the board.
To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W). Actually we have the following [min, max, avg, median] for 47W (55W):
games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
synthetic: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
So typically around 75% for games with a considerably lower TDP - not that bad. I do not know whether Intel claimed equal or better performance given a specific TDP or not. With the given 47W (55W) compared to a 650M it would indeed be a false claim. But my point is that with at least ~60% performance and typically ~75%, it is admittedly much closer than you stated.
Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here: http://www.anandtech.com/bench/Product/814
Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?
The big thing I am wondering is that if Apple releases a higher-end model with the MQ CPU's, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?
If it's done on the GPU, then having the HQ CPU's might actually get *better* UI performance than the MQ CPU's for the rMBP.
The worst mistake Intel made was that demo with DiRT side by side with a 650M laptop. That set people's expectations, and now that it falls short in the reviews, people are dogging it. If they had just kept quiet, people would be praising them up and down right now.
The performance isn't earth-shattering, but if Intel manages to put out good open-source Linux drivers for Iris Pro, I can't help but feel like this would be a great chip for that; it isn't like you'll be playing Crysis in Ubuntu anytime soon. I kind of want that CRB (or something like it), actually.
I'll bet notebooks with mid-range quad core CPU's and gt 750m discrete graphics will be cheaper than notebooks with Iris Pro enabled iGPU graphics as well. The only benefit would be a slightly slimmer chassis and battery life. Anyone who still wants to game on a notebook is noticeably better off with a mid-range discrete GPU over this.
Would a 47W chip be able to fit into a normal 13" Ultrabook-like chassis like the 13" MacBook Pro with Retina Display? Only an extra 12W TDP to deal with.
This would be awesome, and we have to remember that the 47W TDP includes voltage regulation moving off the motherboard, so the gap is maybe only 8W. The 47W TDP also refers to both CPU and GPU running at full speed, which is an extremely rare scenario - in gaming, the CPU load will probably hover at only 50%.
In any case, if the tested model goes into a rMBP 13" I'm going to buy it before Tim Cook has left the stage.
I was thinking of buying an Ivy Bridge MacBook Pro for my wife; I guess she will have to wait a little longer for this baby. I wish they could fit it in a MacBook Air.
Probably; easily if Anand is right about Apple deciding it's good enough to drop the dGPU. Worst case would be Apple taking advantage of the adjustable TDP options to tune the CPU performance/TDP down a bit.
Really impressive! This focus of Intel on graphics will force Nvidia and AMD to push dedicated GPUs forward at a much faster pace at the risk of being destroyed by Intel iGPUs. This couldn't come at a better time with the advent of high resolution screens in notebooks and displays (that new 4K Asus monitor). AMD will need to bring Kaveri with a monster of an iGPU, otherwise Intel just nullified the only area where they had any type of advantage.
Iris Pro is exceptionally good, however you have to ask how much faster the 7660D would be with the same memory bandwidth advantage. Additionally, Trinity is hardly going to be in the same sort of systems, and as the GPU is being held back by the CPU part anyway, it does take a little shine off Iris Pro's astounding performance. Even so, well done Intel, on both the hardware and software fronts.
I think it's kind of a moot point. Selling something this expensive will not affect AMD or even Nvidia that much. You can get an entire AMD APU based notebook for the cost of just this processor. I love the idea of this being pushed forward, but unless Intel can bring it to a lower price point it's kind of pointless.
I'm probably unique in that I want a quad Haswell with the 20EU graphics and a GTX 760M dGPU in a Latitude E6540 (dock!). Wonder if that's going to happen. Probably not.
Still, this looks damn good for Intel and will only improve over time.
Performance roughly in line with expectations, although the compute performance is a nice surprise. It seems to me like Crystalwell is going into exactly the wrong SKUs and the pricing is borderline atrocious, too.
Anyway, since you bring up the awards and a "new system" for them: something I've been thinking about is that there doesn't seem to be a page on the site that explains what each award is supposed to mean and collects all the products that have received them, which I think would be nice.
Interesting that the compute performance punches above its gaming weight. I wonder if they could put more EUs in a chip, maybe a larger eDRAM, and put it on a board as a compute card.
Hmm, so it's heavily hinted that the next rMBP will ditch discrete graphics. The 5200 is good, but that would still be a regression in performance. It wouldn't be the first time Apple has done that: there was the Radeon cut out of the Mini, the 320M to the HD 3000, even the bottom rung of the newest iMac with the 640M. I wonder if it would at least be cheaper to make up for it.
May I ask why The Sims is never featured in your reviews on such GPU setups?
Why? Well, in my line of business, fixing and servicing lots of laptops with integrated chips, the one game that crops up over and over again is The Sims!
Never had a laptop in from the real world that had any of the games you benchmarked here. But lots of them get The Sims played on them.
Agreed. The benchmark list is curiously disconnected from what these kind of systems are actually used to do in the real world. Seldom does anyone use a laptop of any kind to play "Triple-A" hardcore games. Usually it's stuff like The Sims and WoW. I think those should be included as benchmarks for integrated graphics, laptop chipsets, and low-end HTPC-focused graphics cards.
Because the Sims is much easier to run than most of these. Just because people tried running it on GMA graphics and wondered why it didn't work doesn't mean it's a demanding workload.
Yes but the point is the games tested are pretty much pointless. How many here would bother to play them on such equipped laptops?
Pretty much none.
But plenty of 'normal' folks who would buy such equipment will play plenty of lesser games. In my job looking after 'normal' folks, that's quite important when parents ask me about buying a laptop for their kid who wants to play a few games on it.
The world and sites such as Anandtech shouldn't just revolve around the whims of 'gamer dudes', especially as it appears the IT world is generally moving away from gamers.
It's a general computing world in the future, rather than an enthusiast computing world like it was 10 years ago. I think some folks need to re-align their expectations going forward.
It would help immensely if you would say what you were comparing it to. As you are surely aware, a system that includes an A10-5800K but cripples it by leaving a memory channel vacant and running the other at 1333 MHz won't perform at all similarly to a properly built system with the same A10-5800K with two 4 GB modules of 1866 MHz DDR3 in properly matched channels.
That should be an easy fix by adding a few sentences to page 5, but without it, the numbers don't mean much, as you're basically considering Intel graphics in isolation without a meaningful AMD comparison.
I see Razer making an Edge tablet with an Iris-based chip. In fact, it seems built for that idea more than anything else. That or a NUC HTPC run at 720p with no AA ever. You've got superior performance to any console out there right now and it's in a size smaller than an AppleTV.
So yeah, the next Razer Edge should include this as an optional way to lower the cost of the whole system. I also think the next Surface Pro should use this. So high end x86-based laptops with Windows 8 Pro.
And NUC/BRIX systems that are so small they don't have room for discrete GPU's.
I imagine some thinner-than-makes-sense ultrathins could also use this to great effect.
All that said, most systems people will be able to afford and use on a regular basis won't be using this chip. I think that's sad, but it's the way it will be until Intel stops treating Iris as a bonus for high-end users and starts trying to put discrete GPU's out of business by putting it on every chip they make, so people start seeing it CAN do a decent job on its own within its specific limitations.
Right now, no one's going to see that, except those few fringe cases. Strictly speaking, while it might not have matched the 650m (or its successor), it did a decent job with the 640m and that's a lot better than any other IGP by Intel.
1) The NUC uses a 17W TDP chip and overheats. We're not going to have Iris in that form factor yet. 2) It would increase the cost of the Edge, not lower it. Same TDP problem too.
Otherwise I agree, this really needs to roll down lower in the food chain to have a serious impact. Hopefully they'll do that with Broadwell, when the die area used by the GPU effectively becomes free thanks to the process switch.
So Intel was right. Iris Pro pretty much matches a 650M at playable settings (30+ fps). Note that AnandTech is being full of BullS**t here and comparing it to an OVERCLOCKED 650M from Apple. Let's see: when Intel made that 'equal to a 650M' claim it was talking about a standard 650M, not an overclocked 650M running at 900/2500 (GDDR5) versus the normal 835/1000 (GDDR5, with boost at full; no boost = 735 MHz core). If you look at a standard-clocked GDDR3 variant, the Iris Pro 5200 and the 650M are pretty much similar (depending on the game), within around 10%. New Intel drivers should further shorten the gap (given that Intel is quite good in compute).
For the games I tested, the rMBP15 isn't that much faster in many titles. Iris isn't quite able to match the GT 650M, but it's pretty close all things considered.
I would have liked to see some madVR tests. It seems to me that the particular architecture of this chip - lots of computing power, somewhat less memory bandwidth - would be very well suited to madVR's better processing options. It's been established that difficult features like Jinc scaling (the best quality) are limited by shader performance, not bandwidth. The price is far steeper than I would have expected, but once it inevitably drops a bit, I could see mini-ITX boards with this become a viable solution for high-end, passively-cooled HTPCs. By the way, did they ever fix the 23.976 fps error that has been there since Clarkdale?
Anand, would you say the lack of a major performance improvement due to Crystalwell bodes ill for the Xbox One?
The idea is that ESRAM could make the 1.2 TF Xbox One GPU "punch above its weight" with more efficiency due to the 32MB of low-latency cache (ALUs will stall less waiting on data). However, these results don't really show that for Haswell (the compute results that scale perfectly with ALUs, for example).
Note that I'm distinguishing between the cache as a bandwidth saver (I think we can all agree it will serve that purpose) and as an actual performance enhancer. I'm interested in the latter for the Xbox One.
"If Crystalwell demand is lower than expected, Intel still has a lot of quad-core GT3 Haswell die that it can sell and vice versa."
Intel is handicapping demand for GT3e parts by not shipping them in socketed form. I'd love to upgrade my i7-2600k system to a 4770K+GT3e+TSX setup. Seriously Intel, ship that part and take my money.
"The Crystalwell enabled graphics driver can choose to keep certain things out of the eDRAM. The frame buffer isn’t stored in eDRAM for example."
WTF?!? The eDRAM would be the ideal place to store various frequently used buffers. Having 128 MB of memory leaves plenty of room for streaming in textures as need be. The only reason not to hold the full frame buffer is if Intel has an aggressive tile based rendering design and only a tile is stored there. I suspect that Intel's driver team will change this in the future.
"An Ultrabook SKU with Crystalwell would make a ton of sense, but given where Ultrabooks are headed (price-wise) I’m not sure Intel could get any takers."
I bet Apple would ship a GT3e based part in the MacBook Air form factor. They'd do something like lower the GPU clocks to prevent it from melting but they want it. It wouldn't surprise me if Apple managed to negotiate a custom part from Intel again.
Ultimately I'm pleased with GT3e. On the desktop I can see the GPU being used for OpenCL tasks like physics while my Radeon 7970 handles the rest of the graphics load. Or for anything else, I'd like GT3e for the massive L4 cache.
"Ultimatley I'm pleased with GT3e. On the desktop I can see the GPU being used for OpenCL tasks like physics while my Radeon 7970 handles the rest of the graphics load. Or for anything else, I'd like GT3e for the massive L4 cache."
I'd love that to work, but what developer would include that functionality for that niche setup?
OpenCL is supposed to be flexible enough that you can mix execution targets. This also includes the possibility of OpenCL drivers for CPU's in addition to those that use GPU's. At the very least, it'd be nice for a game or application to manually select the OpenCL target in some config file.
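To illustrate, here is a minimal C sketch of that idea; the MYAPP_CL_DEVICE environment variable is purely hypothetical, standing in for whatever config-file setting an application might expose, and error handling is omitted:

```c
/* Minimal sketch (not from the article): list every OpenCL device the
 * installed drivers expose - CPU implementations as well as GPUs - and pick
 * one by index chosen through a hypothetical MYAPP_CL_DEVICE variable. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    const char *want = getenv("MYAPP_CL_DEVICE");   /* e.g. "1" */
    int wanted = want ? atoi(want) : 0;
    int index = 0;
    cl_device_id chosen = NULL;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        /* CL_DEVICE_TYPE_ALL also returns CPU OpenCL drivers, not just GPUs */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
        for (cl_uint d = 0; d < num_devices; ++d, ++index) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("[%d] %s\n", index, name);
            if (index == wanted)
                chosen = devices[d];   /* hand this to clCreateContext() etc. */
        }
    }

    if (chosen)
        printf("Using device %d for OpenCL work.\n", wanted);
    return 0;
}
```

A game could then hand the chosen device to clCreateContext() for its OpenCL physics work while the discrete card keeps doing the rendering.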
I'm only a noob high school junior, but aren't frame buffers tossed after display? What would be the point of storing a frame buffer? You don't reuse the data in it at all. As far as I know, frame buffer != unpacked textures. Also, aren't most modern fully programmable GPUs not tile based at all? Also, wasn't it mentioned that K-series parts don't have TSX?
The z-buffer in particular is written and often read. Deferred rendering also blends multiple buffers together, and at 128 MB in size, the eDRAM can keep several of them in that memory. AA algorithms also perform read/writes on the buffer. At some point, I do see Intel moving the various buffers into the 128 MB of eDRAM as drivers mature. In fairness, this change may not be universal to all games and may depend on things like resolution.
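As a rough back-of-the-envelope (my own numbers, assuming a 1920x1080 target with 4 bytes per pixel per buffer):
1920 x 1080 x 4 B ≈ 8.3 MB per 32-bit render target
4 G-buffer targets + a 32-bit depth/stencil buffer ≈ 5 x 8.3 MB ≈ 41 MB
So a typical deferred setup fits in the 128 MB with plenty of room, and even at the rMBP's native 2880x1800 the same layout would only be around 104 MB.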
Then again, it could be a true cache for the GPU. This would mean that the drivers do not explicitly store the frame buffers there, but they could be stored there based upon prefetching of data. Intel's caching hierarchy is a bit weird, as the CPU's L3 cache can also be used as an L4 cache for the GPU on HD 2000/2500/3000/4000 parts. Presumably the eDRAM would be an L5 cache under the Sandy Bridge/Ivy Bridge schema. The eDRAM has been described as a victim cache, though for GPU operations it would make sense to prefetch large amounts of data (textures, buffers). It'd be nice to get some clarification on this with Haswell.
PowerVR is still tile based. Previous Intel integrated solutions were also tile based, though they dropped that with the HD line (and I can't remember if the GMA line was tile based as well).
And you are correct that the K series don't have TSX, hence why I'd like a 4770K with GT3e and TSX. Also I forgot to throw in VT-d since that too is arbitrarily disabled in the K series.
Kevin G: Intel dropped the Tile-based rendering in the GMA 3 series generation back in 2006. Although, their Tile rendering was different from PowerVR's.
Fair points - I was being a bit myopic and only thought about buffers persisting across frames, neglecting the fact that buffers often need to be reused within the process of rendering a single frame! Can you explain how the CPU's L3 cache is an L4 cache for the GPU? Does the GPU have its own L3 cache already?
Also I don't know whether PowerVR's architecture is considered fully programmable yet. I know they have OpenCL capabilities, but reading http://www.anandtech.com/show/6112/qualcomms-quadc... I'm getting a vague feeling that it isn't as complete as GCN or Kepler, feature wise.
Any idea if this IGP supports 30-bit color and/or 120Hz displays? Currently, laptops like the HP EliteBook 8770w and Dell Precision M6700 haven't been able to use Optimus if you opt for such displays. It would be nice to see that question addressed...
I have been planning on getting a Haswell rMBP 15". I was holding out for Haswell namely due to the increased iGPU performance. My primary issue with the current Ivy Bridge rMBP is the lagginess with much of the UI, especially when there are multiple open windows.
However, I'm a bit concerned about how the Haswell CPU's will compare with the current Ivy Bridge CPU's that Apple is currently shipping with the rMBP. The Haswell equivalent of the current rMBP Ivy Bridge CPU's do not have the Iris Pro, they only have the "slightly improved" HD 4600.
Obviously, we still need to wait until WWDC, but based on the released Haswell info, will Haswell only be a slight bump in performance for the 15" rMBP? If so, that is *very* disappointing news.
This is a huge win for Intel, definitely performance on par with a 650M. It's just as playable on nearly all those games at 1366x768. Even though the 650M pulls away at 1600X900, I wouldn't call either gpu playable in most of those games at that resolution.
If you look at it intelligently, this is a huge win by Intel. The 750M may save them, but if I was in the market for an Ultrabook to complement my gaming notebook, I would definitely go with Iris Pro. Hell, even if I didn't have a dedicated gaming notebook I would probably get Iris Pro in my Ultrabook just for the power savings; it's not that much slower at playable resolutions.
Iris Pro 5200 with eDRAM is only for the quad core standard notebook parts. The highest available for the Ultrabook is the 28W version, the regular Iris 5100. Preliminary results show the Iris 5100 to be roughly on par with the desktop HD 4600.
For those commenting about pricing Intel has only released data for the high end Iris Pro enabled SKUs at this point and cheaper ones are due later. The high end chips are generally best avoided due to being poor value so stay tuned.
I'm looking at getting a Haswell 15" Ultrabook with 16GB RAM and plenty of SSD to run some fairly sophisticated Cisco, Microsoft and VMware cloud labs.
Is it likely that the Crystalwell cache could offset the lower performance specifications on the 4950HQ to make it as competitive, or more so, against the 4900MQ in this scenario?
It would also be good to understand the performance improvement, for non-game video tasks, the HQ part might have over the 4900MQ on a FHD panel. If the advantage isn't there, then, unless the Crystalwell makes a big difference, the 4900MQ part is likely the one to get.
Question: why in Kabini reviews did we get the standard "just wait till Intel releases their next gen parts to see the real competition OMGBBSAUCE!!" marketing spiel, while there was not a mention that Haswell's competition is Kaveri?
Uhh, because Haswell launch was less than a month away from Kabini, while Kaveri is 6+ months away from Haswell?
AMD paper launched Kabini and Richland in March, and products are coming now. Kaveri claims to be late Q4 for Desktop and early Q1 next year for mobile. If they do the same thing, that means Feb-March for Desktop Kaveri and April/May for Mobile. Yeah.... perhaps you should think about that.
The Kabini article never said, "just wait and see what Intel has coming!" so much as it said, "We need to see the actual notebooks to see how this plays out, and with Intel's Celeron and Pentium ULV parts are already at Kabini's expected price point, it's a tough row to hoe." Kabini is great as an ARM or Atom competitor; it's not quite so awesome compared to Core i3, unless the OEMs pass the price savings along in some meaningful way. I'd take Kabini with a better display over Core i3 ULV, but I'll be shocked if we actually see a major OEM do Kabini with a quality 1080p panel for under $500.
It would be appreciated if you just placed all the possible matches in the table, along with a paragraph on the selection criteria the review used to make its choices, to dispel any impression that models were missed.
Yes, really disappointed there is no socketed CPU solution that has the best iGPU config.
But I suppose I already have an Ivy Bridge i5 for my WMC PC and it is good enough. It would still be a nice cheap way to build a secondary small desktop that could also do some light gaming.
Curious why Intel just doesn't go straight for the jugular and release a discrete GPU part on their 22nm process. Nvidia/AMD are stuck at 28nm because of their foundries, and it appears Intel's GPU architecture is feature complete and therefore could be competitive with the discrete parts if they scaled everything up by 4x or 5x.
NVidia & AMD should be worried about their core high-profit-margins business!
The photo you have on page 4 showing the 2 separate die is strange. The haswell die should not be square. Other photos I have seen show the expected (extremely rectangular) haswell die and a tiny ram chip. I would expect a haswell based chip with double the cpu (8 real cores), and no gpu eventually; this would be almost square. Do you know why your chip does not match other multi-chip module photos online?
I guess the other photos are haswell plus an integrated chipset in the same module. The photo of the two die is still strange, as neither of these look like a haswell die.
That's because that's the picture for GT3e Iris Pro 5200 graphics. The bigger square die is the Haswell CPU+GT3 GPU, while the smaller one is the on-package DRAM.
The dual core with on-package chipset is even longer than the regular Haswell.
Yes it should, you're thinking of the ultrabook chips with a controller to the side, not eDRAM. Those ones are rectangular. Look at a haswell MBP 15" teardown to verify.
This is useless at anything above 1366x768 for games (and even that is questionable, as I don't think you were posting minimum fps here). It will also be facing Richland shortly, not AMD's aging Trinity. And the claims of catching a 650M... ROFL. Whatever, Intel. I wouldn't touch a device today with less than 1600x900, and I want to be able to output to at least a 1080p monitor when at home (if not higher, 22in or 24in). Discrete is here to stay, clearly. I have a Dell i9300 (GeForce 6800) from ~2005 that is more potent and runs 1600x900 stuff fine; I think it has 256MB of memory. My dad has an i9200 (Radeon 9700 Pro with 128MB, I think) that this Iris would have trouble with. Intel has a ways to go before they can claim to take out even the low-end discrete cards. You are NOT going to game on this crap and enjoy it, never mind trying to use HDMI/DVI out to a higher-res monitor at home. Good for perhaps the NICHE road warrior market, not much more.
But hey, at least it plays quite a bit of the GOG games catalog now...LOL. Icewind Dale and Baldur's gate should run fine :)
Shimpi's guess as to what will go into the 15-inch rMBP is interesting, but I have a gut feeling that it will not be the case. Despite the huge gains that Iris Pro has over the existing HD 4000, it is still a step back from last year's GT 650M. I doubt Apple will be able to convince its customers to spend $2199 on a computer that has less graphics performance than last year's (now discounted) model. Despite its visual similarity to an Air, the rMBP still has performance as a priority, so my guess is that Apple will stick to discrete for the time-being.
That being said, I think Iris Pro opens up a huge opportunity to the 15-inch rMBP lineup, mainly a lower entry model that finally undercuts the $2000 barrier. In other words, while the $2199 price point may be too high to switch entirely to iGPU, Apple might be able to pull it off at $1799. Want a 15-inch Retina Display? Here's a more affordable model with decent performance. Want a discrete GPU? You can get that with the existing $2199 price point.
As far as the 13-inch version is concerned, my guesses are rather murky. I would agree with the others that a quad-core Haswell with Iris Pro is the best-case scenario for the 13-inch model, but it might be too high an expectation for Apple engineers to live up to. I think Apple's minimum target with the 13-inch rMBP should be dual-core Haswell with Iris 5100. This way, Apple can stick to a lower TDP via dual-core, and while Iris isn't as strong as Iris Pro, its gain over HD 4000 is enough to justify the upgrade. Of course, there's always the chance that Apple has temporary exclusivity on an unannounced dual-core Haswell with Iris Pro, the same way it had exclusivity with ULV Core 2 Duo years ago with MBA, but I prefer not to make Haswell models out of thin air.
You are assuming that the next MBP will have the same chassis size. If thin is in, the dGPU-less Iris Pro is EXTREMELY attractive for heat/power considerations.
More likely is the end of the separate thicker MBP and thin MBAir lines. Almost certainly, starting in two weeks we'll have just one line: MBP, all with Retina, all the thickness of the MBAir, from 11" up to 15".
So the one you pick is the worst of the bunch to show GPU power... jeez. You guys clearly have a CS6 suite license, so why not run Adobe Premiere, which uses CUDA, against the same video render you use in Sony's Vegas? Surely you can rip the same video in both to find out why you'd seek a CUDA-enabled app to rip with. It looks like Handbrake is working on supporting CUDA shortly too. Or heck, try FREEMAKE (yes, free, with CUDA). Anything besides ignoring CUDA and acting like this is what a user would get at home. If I owned an NV card (and I don't in my desktop) I'd seek CUDA for everything I did that I could find. Freemake just put out another update 5/29, a few days ago. http://www.tested.com/tech/windows/1574-handbrake-... 2.5 years ago it was equal; my guess is they've improved CUDA use by now. You've gotta love Adam and Jamie... :) Glad they branched out past just the Mythbusters show.
I have a bad suspicion one of the reasons why you won't see a desktop Haswell part with eDRAM is that it would pretty much euthanize socket 2011 on the spot.
IF Intel does actually release a "K" part with it enabled, I wonder how restrictive or flexible the frequency ratios on the eDRAM will be?
Speaking of socket 2011, I wonder if/when Intel will ever refresh it from Sandy-E?
I wouldn't call myself an expert on computer hardware, but isn't it possible that Iris Pro's bottleneck at 1600x900 resolutions could be attributed to insufficient video memory? Sure, that eDRAM is a screamer as far as latency is concerned, but if the game is running on higher resolutions and utilising HD textures, that 128MB would fill up really quickly, and the chip would be forced to swap often. Better to not have to keep loading and unloading stuff in memory, right?
Others note the similarity between Crystalwell and the Xbox One's 32MB Cache, but let's not forget that the Xbox One has its own video memory; Iris Pro does not, or put another way, it's only got 128 MB of it. In a time where PC games demand at least 512 MB of video RAM or more, shouldn't the bottleneck that would affect Iris Pro be obvious? 128 MB of RAM is sure as hell a lot more than 0, but if games demand at least four times as much memory, then wouldn't Iris Pro be forced to use regular RAM to compensate, still? This sounds to me like what's causing Iris Pro to choke at higher resolutions.
If I am at least right about Crystalwell, it is still very impressive that Iris Pro was able to get in reach of the GT 650M with so little memory to work with. It could also explain why Iris Pro does so much better in Crysis: Warhead, where the minimum requirements are more lenient with video memory (256 MB minimum). If I am wrong, however, somebody please correct me, and I would love to have more discussion on this matter.
The video memory is stored in main memory, be it 4GB or above (so the min specs of Crysis are clearly met)... the point is bandwidth. The article says there is roughly 50GB/s when the cache runs at 1.6GHz. So ramping that up in the future makes the new Iris 5300, I suppose.
Video cards may have 512MB to 1GB of video memory for marketing purposes, but you would be hard pressed to find a single game title that makes use of more than 128.
Uhh, what? Games can use far more than that, seeing them push past 2GB is common. But what matters is how much of that memory needs high bandwidth, and that's where 128MB of cache can be a good enough solution for most games.
As soon as intel CPUs have video performance that exceeds NVidia and AMD flagship video cards I'll get excited. Until then I think of them as something to be disabled on workstations and to be tolerated on laptops that don't have better GPUs on board.
Great success by Intel. The 4600 is near the GT 630 and HD 4650 (much better than the 6450, which sells for $15 at Newegg). The 5200 is better than the GT 640 and HD 6670 (which currently sells for about $50 at Newegg). Intel's integrated graphics used to be worthless compared with discrete cards. It has slowly caught up over the past 3 years, and now the 5200 is beating a $50 card. Can't wait for next year! Hopefully this will finally push AMD and Nvidia to come up with meaningful upgrades to their low-end product lines.
A quick check for my own sanity: Did you configure the A10-5800K with 4 sticks of RAM in bank+channel interleave mode, or did you leave it memory bandwidth starved with 2 sticks or locked in bank interleave mode?
The numbers look about right for 2 sticks, and if that is the case, it would leave Trinity at about 60% of its actual graphics performance.
I find it hard to believe that the 5800K delivers about a quarter the performance per watt of the 4950HQ in graphics, even with the massive, server-crushing cache.
Well, my Asus G50VT laptop is officially obsolete! An Nvidia 512MB GDDR3 9800GS is completely pwned by this integrated GPU, and the CPU is about 50-65% faster clock for clock than the last generation Core 2 Duo Penryn chips. Sure, my X9100 can overclock stably to 3.5GHz, but this one can get close even if all cores are fully taxed.
Can't wait to see what the Broadwell die shrink brings, maybe a 6-core with Iris or a higher clocked 4-core?
I too see that dual core versions of mobile Haswell with this integrated GPU would be beneficial. They could go into small 4.5-pound laptops.
AMD has to create a Crystalwell of their own. I never thought Intel could beat them to it, since their integrated GPUs have always been starved for bandwidth.
They also need to find a way past their manufacturing process disadvantage, which may not be possible at all. We're comparing 22nm Apples to 32/28nm Pears here; it's a relevant comparison because those are the realities of the marketplace, but it's worth bearing in mind when comparing architecture efficiencies.
"What Intel hopes however is that the power savings by going to a single 47W part will win over OEMs in the long run, after all, we are talking about notebooks here." This plus simpler board designs and fewer voltage regulators and less space used. And I agree, I want this in a K-SKU.
And doesn't MacOS support Optimus? RE: "In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%."
Those are strong words at the end, but I agree Intel should make a K-series CPU with Crystalwell. What comes to mind is that they may be doing that for Broadwell.
The Iris Pro solution with eDRAM looks like a nice fit for what I want in my notebook upgrade coming this fall. I've been getting by on a Core 2 Duo laptop, and didn't go for Ivy Bridge because there were no good models with a 1920x1200 or 1920x1080 display without dedicated graphics. For a system that will not be used for gaming at all, but needs resolution for productivity, it wasn't worth it. I hope this will change with Haswell, and that I will be able to get a 15" laptop with >= 1200p without dedicated graphics. The 4950HQ or 4850HQ seems like an ideal fit. I don't mind spending $1500-2000 for a high quality laptop :)
You got the FLOPs rating wrong on the Sandy Bridge parts. They are at 1/2 of Ivy Bridge.
1350MHz with 12 EUs and 8 FLOPs/EU will result in 129.6 GFLOPS. While it's true that in very limited scenarios Sandy Bridge's iGPU can co-issue, it's small enough to be non-existent. That is why a 6 EU HD 2500 comes close to a 12 EU HD 3000.
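Written out: 1.35 GHz x 12 EUs x 8 FLOPs/EU/clock = 129.6 GFLOPS; the doubled figure (259.2 GFLOPS) only appears if you count the rarely-usable co-issue case as 16 FLOPs/EU/clock.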
If they use only the HD 4600 and Iris Pro, that'd probably be better, as long as it's clearly labeled on laptops: HD 4600 (don't expect to do any video work on this), Iris Pro (it's passable in a pinch).
But I don't think that's what's going to happen. Iris Pro could be great for Ultrabooks; I don't really see any use outside of that though. A low end GT740M is still a better option in any laptop that has the thermal room for it. Considering you can put those in 14" or larger ultrabooks I still think Intel's graphics aren't serious. Then you consider the lack of Compute, PhysX, Driver optimization, game specific tuning...
Good to see a hefty performance improvement. Still not good enough though. Also pretty upsetting to see how many graphics SKU's they've released. OEM'S are gonna screw people who don't know just to get the price down.
The SKU price is 500 DOLLARS!!!! They're charging you 200 bucks for a pretty shitty GPU. Intel's greed is so disgusting it overrides the engineering prowess of their employees. Truly disgusting, Intel, to charge that much for that level of performance. AMD, we need you!!!!
May I ask a noob question? Question: do we have no i5s or i7s WITHOUT on-board graphics any more? As a gamer I'd prefer to have a CPU + discrete GPU in my gaming machine, and I don't like to have extra stuff stuck on the CPU, lying there consuming power and having no use (for my part) whatsoever. No Ivy Bridge or Haswell i5s or i7s without an iGPU, or whatever you call it?
WHY THE HELL ARE THOSE SO EXPENSIVE!!!!! Holy SHIT! 500 dollars for a 4850HQ? They're charging you 200 dollars for a shitty GPU with no dedicated RAM at all! Just a cache! WTFF!!!
Intel's greed is truly disgusting... even in the face of their engineering prowess.
What I don't understand is why Intel didn't do a "next-gen console-like processor", like taking the 4770R and doubling or even quadrupling the GPU; wasn't there space? The thermal headroom must have been there, as we are used to CPUs with as high as 130W TDP. Anyhow, combining that with awesome drivers for Linux would have been real competition to AMD/PS4/XONE for Valve/Steam: a complete system under 150W capable of awesome 1080p60 gaming.
So now I am looking for the best performing GPU under 75W, ie no external power. Which is it, still the Radeon HD7750?
Without a direct comparison between HD 5000/5100 and Iris Pro 5200 with Crystalwell, how can we conclude that Crystalwell has any effect in any of the game benchmarks? While it clearly is of benefit in some compute tasks, in the game benchmarks you only compare to HD 4600 with half as many EU's and to Nvidia and AMD with their different architectures.
We really need to see Iris Pro 5200 vs HD5100 to get an apples to apples comparison and be able to determine if Crystalwell is worth the extra money.
" An Ultrabook SKU with Crystalwell would make a ton of sense, but given where Ultrabooks are headed (price-wise) I’m not sure Intel could get any takers."
They sure seem to be going up in price, rather than down at the moment...
Intel has once again made their naming so confusing that even their own marketing weasels can't get it right. Notice that the Intel slide titled "4th Gen Intel Core Processors H-Processors Line" calls the graphics in the i7-4950HQ and i7-4850HQ "Intel HD Graphics 5200" instead of the correct name which is "Intel Iris Pro Graphics 5200". This slide calls the graphics in the i7-4750HQ "Intel Iris Pro Graphics 5200" which indicates that the slide was made after the creation of that name. It is little wonder that most media outlets are acting as if the biggest tech news of the month is the new pastel color scheme in iOS 7.
The peak theoretical GPU performance calculations shown are wrong for Intel's GFLOPS numbers. Correct numbers are half of what is shown. The reason is that Intel's execution units are made up of an integer vec4 processor and a floating-point vec4 processor. This article correctly states it has a 2x vec4 SIMD, but does not point out that half is integer and half is floating-point. For a GFLOPS computation, one should only include the floating-point operations, which means only half of that execution unit's silicon is getting used. The reported computation performance would only be correct if you had an algorithm with a perfect mix of integer and float math that could be co-issued. To compare apples to apples, you need to stick to GFLOPS numbers and divide all the Intel numbers in the table by 2. For example, peak FP ops per EU per clock on the Intel HD 4000 would be 8, not 16. Compared this way, Intel is not stomping all over AMD and Nvidia for compute performance, but it does appear they are catching up.
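To make the arithmetic concrete (taking 1150 MHz as the HD 4000's max turbo is my own assumption, purely for illustration):
16 EUs x 16 FLOPs/EU/clock x 1.15 GHz = 294.4 GFLOPS as typically quoted (integer + FP vec4 both counted)
16 EUs x 8 FLOPs/EU/clock x 1.15 GHz = 147.2 GFLOPS counting only the floating-point half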
I think Intel could push their yield way up by offering 32MB and 64MB versions of Crystalwell for i3 and i5 processors. They could charge the same markup for the 128, but sell the 32/64 for cheaper. It would cost Intel less and probably let them take even further market share from low-end dGPUs.
It is funny how a non-PC company changed the course of Intel forever for the good. I hope that Intel is wise enough to use this to spring-board the PC industry to a new, grand future. No more tick-tock nonsense arranged around sucking as many dollars out of the customer as possible, but give the world the processing power it craves and needs to solve the problems of tomorrow. Let this be your heritage and your profits will grow to unforeseen heights. Surprise us!
I wonder where this is going. Yes, the multi core and on-hand cache and graphics may be goody, ta. But human interaction in actual products? I weigh in at 46kg but think nothing of running with a Bergen/burden of 20kg, so a big heavy laptop with an integrated 10hr battery and an 18.3" screen would be efficacious. What is all this current affinity with small screens? I could barely discern the vignette of the feathers of a water fowl at no more than 130m yesterday on my morning run in the Clyde Valley woodlands. For the "laptop", a > 17" screen; for the desktop, 2x27", all discernible pixels, every one of them to be a prisoner. 4 core or 8 core, and I bore the poor little devils with my incompetence with DSP and the Julia language. And SPICE etc.
P.S. Can still average 11mph @ 50+ years of age. Some things one does wish to change. And thanks to the Jackdaws yesterday morning whilst I was fertilizing a Douglas Fir; they took the boredom out of an otherwise perilous predicament.
Hello. Look, 99% of all the comments here are out of my league. Could you answer a question for me please? I use an open source 3D computer animation and modeling program called Blender3D. The users of this program say that the GTX 650 is the best GPU for this program, citing that it works best for compute-intensive tasks such as rendering with HDR, fluids, and other particle effects, and they say that other cards that work great for gaming and video fall short for that program. Could you tell me how this Intel Iris Pro would do in a case such as this? Would the tests made here be relevant to this case?
Same here johncaldwell. I would like to know the same.
I am a Blender 3D user and work with the Cycles renderer, which also uses the GPU to process its renders. I am planning to invest in a new workstation: either custom-built hardware for a Linux box or the latest MacBook Pro from Apple. In case of the latter, how useful will it be in terms of performance for GPU rendering in Blender?
Wow, I can't believe I understood this. My computer architecture class paid off... except I got lost when they were talking about n1/n2 nodes... that must have been a post-2005 feature in CPU north bridge/south bridge technology.
I don't think you understand the difference between DRAM circuitry and arithmetic circuitry. A DRAM foundry process is tuned for high capacitance so that the memory lasts longer before refresh. High capacitance is DEATH to high-speed circuitry for arithmetic execution, that circuitry is tuned for very low capacitance, ergo, tuned for speed. By using DRAM instead of SRAM (which could have been built on-chip with low-capacitance foundry processes), Intel enlarged the cache by 4x+, since an SRAM cell is about 4x+ larger than a DRAM cell.
Not a bad GPU at all. On a small laptop screen you can game just fine, but it should be paired with a lower-end CPU, and the i3, i5, and i7 should have Nvidia or AMD solutions.
tipoo - Saturday, June 1, 2013 - link
It still seems to me that this misses where it would benefit most: 13 inch laptops, which currently mostly use dual core processors. GT3e would make something like the Retina MBP 13" much more appealing for instance, but it's paired with processors such that the wattage would be too high.tipoo - Saturday, June 1, 2013 - link
Oh and I wanted to ask, if the integrated graphics are disabled can the CPU still tap into the eDRAM?Ryan Smith - Saturday, June 1, 2013 - link
Yes, it's a dedicated cache for both the CPU and the GPU. However it's very unlikely you're going to run into any scenario that uses a Crystalwell-equipped part in such a manner. It's not being sold in socket form, so it will go to OEMs, who in turn would only use it if they didn't include a dGPU.jeffkibuule - Saturday, June 1, 2013 - link
So pretty much, unless you've got some huge beefy GPU that would absolutely suck up power compared to just using Iris Pro graphics, no one would opt for that SKU?shiznit - Saturday, June 1, 2013 - link
Right on. A dual core model for the 13" rMPB would have me selling my 2012 immediately. Now I need to decide if I can live with the 15" or even bother.moep - Saturday, June 1, 2013 - link
If i interpreted the results of this article correctly, I suspect that the 15" MBP is probably going to get a new and even thinner form factor with this refresh. (one chip less, fewer VRM related parts, lower combined TDP)A 15" rMBP approaching the weight of a 15" Macbook Air would be very interesting, although a part of me hoped that Apple would wait until Broadwell to ditch the dGPU in the 15".
Such a step back in GPU performance with the Retina display is surely not going to be very pleasant in 3D applications.
Galatian - Saturday, June 1, 2013 - link
I actually hope/suspect, that Apple will go the other road: use a discrete graphic solution on the 15" rMBP until Broadwell comes out, but have a cTDPdown version of the 4850HQ on the 13" rMBP. Maybe they can even get the normal TDP version in there; after all it has the same (good) cooling the 15" rMBP has and I have never heard the fans on mine. I think Apple really designed it with Haswell in mind, so let's see what they'll bring on during the next few weeks.tipoo - Saturday, June 1, 2013 - link
That's certainly the best case, I really hope they go down that road. The rMBP as a quad with Iris Pro would really make it worth the Pro name.vFunct - Sunday, June 2, 2013 - link
They'll probably stick with the built in GPU for the 13" model and a discrete GPU for the 15" model, which is what they do right now.Apple's top-end MacBook Pro has always had the highest end discrete GPU available.
Spunjji - Tuesday, June 4, 2013 - link
I'm guessing you mean "for a given power usage", as there are definitely faster GPUs out there than the 650M.Elitehacker - Tuesday, September 24, 2013 - link
Even for a given power usage the 650M isn't even to on the top of the list for highest end discrete GPU.... The top at the moment for lowest wattage to power ratio would be the 765M, even the Radeon HD 7750 has less wattage and a tad more power than the 650M. Clearly someone did not do their researching before opening their mouth.I'm gonna go out on a limb and say that vFunct is one of those Apple fanboys that knows nothing about performance. You can get a PC laptop in the same size and have better performance than any Macbook available for $500 less. Hell you can even get a Tablet with an i7 and 640M that'll spec out close to the 650M for less than a Macbook Pro with 650M.
Eric S - Tuesday, June 25, 2013 - link
The Iris Pro 5200 would be ideal for both machines. Pro users would benefit from ECC memory for the GPU. The Iris chip uses ECC memory making it ideal for OpenCL workloads in Adobe CS6 or Final Cut X. Discrete mobile chips may produce errors in the OpenCL output. Gamers would probably prefer a discrete chip, but that isn't the target for these machines.Eric S - Monday, July 1, 2013 - link
I think Apple cares more about the OpenCL performance which is excellent on the Iris. I doubt the 15" will have a discrete GPU. There isn't one fast enough to warrant it over the Iris 5200. If they do ever put a discrete chip back in, I hope they go with ECC GDDR memory. My guess is space savings will be used for more battery. It is also possible they may try to reduce the display bezel.emptythought - Tuesday, October 1, 2013 - link
It's never had the highest end chip, just the best "upper midrange" one. Above the 8600m GT was the 8800m GTX and GTS, and above the 650m there was the 660, a couple 670 versions, the 675 versions, and the 680.They chose the highest performance part that hit a specific TDP, stretching a bit from time to time. It was generally the case that anything which outperformed the MBP was either a thick brick, or had perpetual overheating issues.
CyberJ - Sunday, July 27, 2014 - link
Not even close, but whatever floats you boat.emptythought - Tuesday, October 1, 2013 - link
It wouldn't surprise me if the 15in just had the "beefed up" iris pro honestly. They might even get their own, special even more overclocked than 55w version.Mainly, because it wouldn't be without precedent. Remember when the 2009 15in macbook pro had a 9400m still? Or when they dropped the 320m for the hd3000 even though it was slightly slower?
They sometimes make lateral, or even slightly backwards moves when there are other motives at play.
chipped - Sunday, June 2, 2013 - link
That's just crazy talk, they want drop dedicated graphics. The difference is still too big, plus you can't sell a $2000 laptop without a dedicated GFX.shiznit - Sunday, June 2, 2013 - link
considering Apple specifically asked for eDRAM and since there is no dual core version yet for the 13", I'd say there is very good chance.mavere - Sunday, June 2, 2013 - link
"The difference is still too big"The difference in what?
Something tells me Apple and its core market is more concerned with rendering/compute performance more than Crysis 3 performance...
iSayuSay - Wednesday, June 5, 2013 - link
If it plays Crysis 3 well, it can render/compute/do whatever intensive work fine.
virgult - Saturday, August 31, 2013 - link
Nvidia Kepler plays Crysis 3 well but it sucks insanely hard at computing and rendering.
Eric S - Wednesday, July 3, 2013 - link
It appears to do compute better than graphics (and ECC memory is a plus for compute). That is exactly what pros will be looking for. Apple doesn't cater to the gaming market with these machines even if they should play most games fine. A dedicated gaming machine would be built much differently than this.
jasonelmore - Sunday, June 2, 2013 - link
This. I don't know about anyone else, but I'm not dropping 2 grand, or $2700 with upgrades, on a 15-incher that does not have dedicated graphics.
Another problem I see is that the 13" Retina only uses duals, and if they did use this quad with GT3e silicon, then the price of the 13" will go up at least $150, since the i7's and i5's the 13" currently uses are sub-$300 parts.
The only solution I see is Apple offering it as a build-to-order/max upgrade option, and even then they risk segmentation across the product line.
fteoath64 - Monday, June 3, 2013 - link
"can't sell a $2000 laptop without a dedicated GFX". Absolutely true, especially when the GT3e is still a little slower than the 650M. So the 750M tweaked a few mhz higher will do nicely for the rMBP. The 13 incher will get a boost with the GT3e CPU. So a slight upgrade to lower power cpu maybe worthwhile to some. Improvement to 1080p eyesight camera would be a given for the new rMBP.Eric S - Wednesday, July 3, 2013 - link
You can drop discrete graphics when that $2000+ laptop is using builtin graphics with the same price premium and number of transistors of the discrete chip. I'm almost positive the discrete will go away. I have a feeling that Apple had a say in optimizations and stressed OpenCL performance. That is probably what they will highlight when they announce a new MacBook Pro.xtc-604 - Saturday, June 8, 2013 - link
I really hope that Apple continues to treat the rMBP 15 as a flagship. Giving it iGPU only would be a deal breaker for many professionals. Atleast in haswell's current form. Until Intel can make an IGPU that atleast matches or exceeds performance at high resolutions, it is still a no go for me.Eric S - Wednesday, July 3, 2013 - link
Why is that a deal breaker? The Iris 5200 is better then a discrete chip for compute (OpenCL). If you are doing 3D rendering, video editing, photoshop, bioinformatics, etc. that is what you should care about. It also has ECC memory unlike a discrete chip so you know your output is correct. How fast it can texture triangles is less important. It still has plenty of power in that area for any pro app. This is not designed to be a gaming machine. Not sure why anyone would be surprised it may not be optimized for that.Eric S - Monday, July 1, 2013 - link
You never know, but I doubt it. They will have trouble with the ports on the side if they make it smaller. I think it is more likely the space saving will go to additional battery. They may be able to get similar battery life increases to the Air with the extra space.mikeztm - Tuesday, June 4, 2013 - link
Notice that the 13" 2012 rMBP is a little thicker than the 15" version. Quad core in 13 inch may be planned at the very beginning.axien86 - Saturday, June 1, 2013 - link
Look at the overheating issues that come with i5/i7 Razer notebooks, and the same heating was noticed at their Haswell notebook press event several days ago.
If Apple decides to use these Haswells which put out heat in a concentrated area and in very thin outlines, you are essentially computing over a mini-bake oven.
jasonelmore - Sunday, June 2, 2013 - link
Looking at the prices, this will raise the price or lower the margins of the 13" Retina MacBook Pro by about $150 each.
mschira - Sunday, June 2, 2013 - link
Yeah, laptops benefit most - good for them.
But what about the workstation?
So Intel stopped being a CPU company and turned into a mediocre GPU company? (It can't even beat last year's GT 650M.)
I would applaud the rise in GPU performance if they had not completely forgotten the CPU.
M.
n13L5 - Monday, June 3, 2013 - link
You're exactly right.
13" ultrabook buyers who need it the most get little to nothing out of this.
And desktop users don't need or want GT3e and it uses system RAM. Better off buying a graphics card instead of upgrading to Haswell on desktops.
glugglug - Tuesday, June 4, 2013 - link
While I agree this misses "where it would benefit most", I disagree on just *where* that is.
I guess Intel agrees with Microsoft's implicit decision that Media Center is dead. Real-time HQ QuickSync would be perfect to transcode anything extenders couldn't handle, and would also make the scanning for and skipping of commercials incredibly efficient.
n13L5 - Tuesday, June 11, 2013 - link
Core i5 4350U - Iris 5000 - 15W - 1.5 GHz
Core i7 4550U - Iris 5000 - 15W - 1.5 GHz
Core i7 4650U - Iris 5000 - 15W - 1.7 GHz
These should work. The 4650U is available in the Sony Duo 13 as we speak, though at a hefty price tag of $1,969.
Eric S - Monday, July 1, 2013 - link
The last 13" looks like they were prepping it for a fusion drive then changed their mind leaving extra space in the enclosure. I think it is due for an internal redesign that could allow for a higher wattage processor.I think the big deal is the OpenCL performance paired with ECC memory for the GPU. The Nvidia discrete processor uses non-ECC GDDR. This will be a big deal for users of Adobe products. Among other things, this solves the issue of using the Adobe mercury engine with non-ECC memory and the resulting single byte errors in the output. The errors are not a big deal for games, but may not be ideal for rendering professional output and scientific applications. This is basically a mobile AMD FireGL or Nvidia Quadro card. Now we just need OpenCL support for the currently CUDA-based mercury engines in After Effects and Premiere. I have a feeling that is coming or Adobe will also lose Mercury Engine compatibility with the new Mac Pro.
tviceman - Saturday, June 1, 2013 - link
Impressive iGPU performance, but I knew Intel was absolutely full of sh!t when claiming equal to or better than GT 650M performance. It's not really even close, typically behind by 30-50% across the board.
Krysto - Saturday, June 1, 2013 - link
When isn't Intel full of shit? Always take the improvements they claim and cut them in half, and you'll be a lot closer to reality.
xtc-604 - Saturday, June 8, 2013 - link
Lol... you think that's bad? Look at Apple's claims. "Over 200 new improvements in Mountain Lion."
piroroadkill - Saturday, June 1, 2013 - link
sh<exclamation point>t? What are we? 9?
kyuu - Saturday, June 1, 2013 - link
It's probably habit coming from evading censoring.
maba - Saturday, June 1, 2013 - link
To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W).
Actually we have the following [min, max, avg, median] for 47W (55W):
games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
synth.: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
So typically around 75% for games with a considerably lower TDP - not that bad.
I do not know whether Intel claimed equal or better performance given a specific TDP or not. With the given 47W (55W) compared to a 650M it would indeed be a false claim.
But my point is, that with at least ~60% performance and typically ~75% it is admittedly much closer than you stated.
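For anyone who wants to reproduce this kind of summary from the review's charts, a minimal sketch of the arithmetic; the frame rates below are made-up placeholders purely for illustration, not the article's actual results:

```python
# Relative performance: Iris Pro 5200 fps divided by GT 650M fps, per title.
# The fps values are placeholders to illustrate the calculation only.
iris_fps   = {"Metro: LL": 24.0, "BioShock Infinite": 38.0, "Tomb Raider": 45.0, "Sleeping Dogs": 30.0}
gt650m_fps = {"Metro: LL": 32.0, "BioShock Infinite": 48.0, "Tomb Raider": 52.0, "Sleeping Dogs": 41.0}

ratios = sorted(iris_fps[t] / gt650m_fps[t] for t in iris_fps)
n = len(ratios)
median = ratios[n // 2] if n % 2 else (ratios[n // 2 - 1] + ratios[n // 2]) / 2
print(f"min {min(ratios):.0%}, max {max(ratios):.0%}, "
      f"avg {sum(ratios) / n:.0%}, median {median:.0%}")
```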
whyso - Saturday, June 1, 2013 - link
Note your average 650M is clocked lower than the 650M reviewed here.
lmcd - Saturday, June 1, 2013 - link
If I recall correctly, the rMBP 650M was clocked as high as or slightly higher than the 660M (which was really confusing at the time).
JarredWalton - Sunday, June 2, 2013 - link
Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here: http://www.anandtech.com/bench/Product/814
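As a rough sanity check on what those RAM clocks mean for bandwidth, a minimal sketch; it assumes the 128-bit bus of the GDDR5 GT 650M/660M, which the comment itself doesn't state:

```python
# Peak memory bandwidth (GB/s) = effective data rate per pin (Gb/s) * bus width (bits) / 8.
def peak_bandwidth_gbs(effective_gbps, bus_width_bits):
    return effective_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(4.0, 128))  # "4GHz" GDDR5 on a 128-bit bus -> 64.0 GB/s
print(peak_bandwidth_gbs(5.0, 128))  # the 1250MHz (5GHz effective) clocks cited below -> 80.0 GB/s
```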
tipoo - Sunday, June 2, 2013 - link
I think any extra power just went to the rMBP scaling operations.
DickGumshoe - Sunday, June 2, 2013 - link
Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?
The big thing I am wondering is: if Apple releases a higher-end model with the MQ CPUs, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?
If it's done on the GPU, then having the HQ CPUs might actually get *better* UI performance than the MQ CPUs for the rMBP.
lmcd - Sunday, June 2, 2013 - link
No, because these benchmarks would change the default resolution, which as I understand it is something the panel would compensate for?
Wait, aren't these typically done while the laptop screen is off and an external display is used?
whyso - Sunday, June 2, 2013 - link
You got this wrong. The 650M is 735/1000 + boost to 850/1000. The 660M is 835/1250, boost to 950/1250.
jasonelmore - Sunday, June 2, 2013 - link
The worst mistake Intel made was that demo with DiRT when it was side by side with a 650M laptop. That set people's expectations, and now it falls short in the reviews and people are dogging it. If they had just kept quiet, people would be praising them up and down right now.
Old_Fogie_Late_Bloomer - Monday, June 3, 2013 - link
The performance isn't earth-shattering, but if Intel manages to put out good open-source Linux drivers for Iris Pro, I can't help but feel like this would be a great chip for that; it isn't like you'll be playing Crysis in Ubuntu anytime soon. I kind of want that CRB (or something like it), actually.
tviceman - Saturday, June 1, 2013 - link
I'll bet notebooks with mid-range quad core CPUs and GT 750M discrete graphics will be cheaper than notebooks with Iris Pro enabled iGPU graphics as well. The only benefits would be a slightly slimmer chassis and battery life. Anyone who still wants to game on a notebook is noticeably better off with a mid-range discrete GPU over this.
esterhasz - Saturday, June 1, 2013 - link
On page four, the ominous launch partner is not "keen" rather than "key", I guess. I'd be very keen on having that rMBP 13" with IP5200, though.
Ryan Smith - Saturday, June 1, 2013 - link
Noted and fixed. Thank you.
tipoo - Saturday, June 1, 2013 - link
I'm very much in that boat too; a quad core 13" rMBP with Iris Pro would put it over the top.
MattVincent - Wednesday, June 12, 2013 - link
Totally agree. I wonder if Apple will actually put a quad core in the 13" though. I bet they would rather sell more 15" rMBPs.
jeffkibuule - Saturday, June 1, 2013 - link
Would a 47W chip be able to fit into a normal 13" Ultrabook-like chassis like the 13" MacBook Pro with Retina Display? Only an extra 12W TDP to deal with.esterhasz - Saturday, June 1, 2013 - link
This would be awesome, and we have to remember that the 47W TDP includes voltage regulation moving off the motherboard, so the gap is maybe only 8W. The 47W TDP also refers to both CPU and GPU running at full speed, which is an extremely rare scenario - in gaming, the CPU load will probably hover at 50% only.
In any case, if the tested model goes into a rMBP 13" I'm going to buy it before Tim Cook has left the stage.
nofumble62 - Saturday, June 1, 2013 - link
Thinking of buying an Ivy Bridge MacBook Pro for my wife; I guess she will have to wait a little longer for this baby. I wish they could fit one in a MacBook Air.
jeffkibuule - Saturday, June 1, 2013 - link
Look at the price of those chips though; you're going to be dropping at least $2000 on such a laptop when the CPU alone is $478.
tipoo - Saturday, June 1, 2013 - link
I really hope so, the Retina MacBook Pro 13" would get a whole lot more appealing with quad core and Iris Pro.
DanNeely - Saturday, June 1, 2013 - link
Probably; easily, if Anand is right about Apple deciding it's good enough to drop the dGPU. Worst case would be Apple taking advantage of the adjustable TDP options to tune the CPU performance/TDP down a bit.
Gaugamela - Saturday, June 1, 2013 - link
Really impressive!
This focus of Intel on graphics will force Nvidia and AMD to push dedicated GPUs forward at a much faster pace at the risk of being destroyed by Intel iGPUs. This couldn't come at a better time with the advent of high resolution screens in notebooks and displays (that new 4K Asus monitor).
AMD will need to bring Kaveri with a monster of an iGPU, otherwise Intel has just nullified the only area where they had any type of advantage.
Blibbax - Saturday, June 1, 2013 - link
I question how much more can be had from APU graphics with the bandwidth restrictions of 64-bit DDR3.
silverblue - Saturday, June 1, 2013 - link
Iris Pro is exceptionally good, however you have to ask how much faster the 7660D would be with the same memory bandwidth advantage. Additionally, Trinity is hardly going to be in the same sort of systems, and as the GPU is being held back by the CPU part anyway, it does take a little shine off Iris Pro's astounding performance. Even so, well done Intel, on both the hardware and software fronts.
trulyuncouth1 - Saturday, June 1, 2013 - link
I think it's kind of a moot point; selling something this expensive will not affect AMD or even Nvidia that much. You can get an entire AMD APU based notebook for the cost of just this processor. I love the idea of this being pushed forward, but unless Intel can bring it to a lower price point it's kind of pointless.
ilkhan - Saturday, June 1, 2013 - link
I'm probably unique in that I want a quad Haswell with the 20 EU graphics and a GTX 760M dGPU in a Latitude (dock!) E6540. Wonder if that's going to happen. Probably not.
Still, this looks damn good for Intel and will only improve over time.
lmcd - Sunday, June 2, 2013 - link
How about, rather, a 760 dGPU from a Latitude dock? A bit more appealing :-)
Zandros - Saturday, June 1, 2013 - link
Performance roughly in line with expectations, although the compute performance is a nice surprise. It seems to me like Crystalwell is going into exactly the wrong SKUs and the pricing is borderline atrocious, too.
Anyway, since you bring up the awards and a "new system" for them, something I've been thinking a bit about is how there doesn't seem to be a page on the site where it is explained what each award is supposed to mean and which collects all the products that have received them, which I think would be nice.
kallogan - Saturday, June 1, 2013 - link
Where is da power consumption??????
whyso - Saturday, June 1, 2013 - link
They are completely different systems, making power consumption values irrelevant.
codedivine - Saturday, June 1, 2013 - link
Hi folks. Can you post the OpenCL extensions supported? You can use something like "GPU Caps Viewer" from Geeks3d.
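For readers who would rather query this from a script than from GPU Caps Viewer, a minimal sketch using the pyopencl bindings (assumed installed; any vendor's OpenCL runtime will do):

```python
# Print every OpenCL platform/device the runtime exposes, along with its extension string.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name} / {device.name}")
        print(" ", device.extensions)  # e.g. cl_khr_fp64 cl_khr_gl_sharing ...
```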
tipoo - Saturday, June 1, 2013 - link
Interesting that the compute punches above its gaming-performance weight. I wonder if they could put more EUs in a chip, maybe a larger eDRAM, and put it on a board as a compute card.
lmcd - Saturday, June 1, 2013 - link
They already have a compute card called Xeon Phi, if I remember correctly.
Klimax - Sunday, June 2, 2013 - link
Different arch (x86 in Phi).
tipoo - Sunday, June 2, 2013 - link
I'm aware, but the Xeon Phi requires completely different programming than a GPU like this, which can just use OpenCL.
Soul_Master - Saturday, June 1, 2013 - link
What's the point of comparing a desktop GPU with a mid-range mobile GPU? The CPUs on the two devices are not equal.
Soul_Master - Saturday, June 1, 2013 - link
Sorry. I misunderstood; the i7-4950HQ is a high-end quad-core processor for laptops.
Ryan Smith - Sunday, June 2, 2013 - link
It's what we had available. We wanted to test a DDR3 version of GK107, and that's what was on hand.
tipoo - Saturday, June 1, 2013 - link
Hmm, so it's heavily hinted that the next rMBP will ditch discrete graphics. The 5200 is good, but that would still be a regression in performance. It's not the first time Apple would have done that: there was the Radeon cut out of the Mini, the 320M to the HD 3000, even the bottom rung of the newest iMac with the 640M. I wonder if it would at least be cheaper to make up for it.
beginner99 - Saturday, June 1, 2013 - link
Impressive... if you ignore the pricing.
tipoo - Sunday, June 2, 2013 - link
?
velatra - Saturday, June 1, 2013 - link
On page 4 of the article there's a word "presantive" which should probably be "representative."
jabber - Saturday, June 1, 2013 - link
May I ask why The Sims is never featured in your reviews of such GPU setups?
Why? Well, in my line of business, fixing and servicing lots of laptops with integrated chips, the one game group that crops up over and over again is The Sims!
Never had a laptop in from the real world that had any of the games you benchmarked here. But lots of them get The Sims played on them.
JDG1980 - Saturday, June 1, 2013 - link
Agreed. The benchmark list is curiously disconnected from what these kinds of systems are actually used for in the real world. Seldom does anyone use a laptop of any kind to play "Triple-A" hardcore games. Usually it's stuff like The Sims and WoW. I think those should be included as benchmarks for integrated graphics, laptop chipsets, and low-end HTPC-focused graphics cards.
tipoo - Saturday, June 1, 2013 - link
Because The Sims is much easier to run than most of these. Just because people tried running it on GMA graphics and wondered why it didn't work doesn't mean it's a demanding workload.
jabber - Saturday, June 1, 2013 - link
Yes, but the point is the games tested are pretty much pointless. How many here would bother to play them on such equipped laptops?
Pretty much none.
But plenty of 'normal' folks who would buy such equipment will play plenty of lesser games. In my job looking after 'normal' folks, that's quite important when parents ask me about buying a laptop for their kid that wants to play a few games on it.
The world and sites such as AnandTech shouldn't just revolve around the whims of 'gamer dudes', especially as it appears the IT world is generally moving away from gamers.
It's a general computing world in future, rather than an enthusiast computing world like it was 10 years ago. I think some folks need to re-align their expectations going forward.
tipoo - Sunday, June 2, 2013 - link
I mean, if it can run something like Infinite or even Crysis 3 fairly well, you can assume it would run The Sims well.
Quizzical - Saturday, June 1, 2013 - link
It would help immensely if you would say what you were comparing it to. As you are surely aware, a system that includes an A10-5800K but cripples it by leaving a memory channel vacant and running the other at 1333 MHz won't perform at all similarly to a properly built system with the same A10-5800K with two 4 GB modules of 1866 MHz DDR3 in properly matched channels.
That should be an easy fix by adding a few sentences to page 5, but without it, the numbers don't mean much, as you're basically considering Intel graphics in isolation without a meaningful AMD comparison.
Quizzical - Saturday, June 1, 2013 - link
Ah, it looks like the memory clock speeds have been added. Thanks for that.
HisDivineOrder - Saturday, June 1, 2013 - link
I see Razer making an Edge tablet with an Iris-based chip. In fact, it seems built for that idea more than anything else. That, or a NUC HTPC run at 720p with no AA ever. You've got superior performance to any console out there right now and it's in a size smaller than an Apple TV.
So yeah, the next Razer Edge should include this as an optional way to lower the cost of the whole system. I also think the next Surface Pro should use this. So high end x86-based laptops with Windows 8 Pro.
And NUC/BRIX systems that are so small they don't have room for discrete GPU's.
I imagine some thinner than makes sense ultrathins could also use this to great effect.
All that said, most systems people will be able to afford and use on a regular basis won't be using this chip. I think that's sad, but it's the way it will be until Intel stops trying to use Iris as a bonus for the high end users instead of trying to put discrete GPU's out of business by putting these on every chip they make so people start seeing it CAN do a decent job on its own within its specific limitations.
Right now, no one's going to see that, except those few fringe cases. Strictly speaking, while it might not have matched the 650m (or its successor), it did a decent job with the 640m and that's a lot better than any other IGP by Intel.
Spunjji - Tuesday, June 4, 2013 - link
You confused me here on these points:
1) The NUC uses a 17W TDP chip and overheats. We're not going to have Iris in that form factor yet.
2) It would increase the cost of the Edge, not lower it. Same TDP problem too.
Otherwise I agree, this really needs to roll down lower in the food chain to have a serious impact. Hopefully they'll do that with Broadwell, when the die area used by the GPU effectively becomes free thanks to the process switch.
whyso - Saturday, June 1, 2013 - link
So Intel was right. Iris Pro pretty much matches a 650M at playable settings (30 fps+). Note that AnandTech is being full of BullS**t here and comparing it to an OVERCLOCKED 650M from Apple. Let's see: when Intel made that 'equal to a 650M' claim, it was talking about a standard 650M, not an overclocked 650M running at 900/2500 (GDDR5) vs the normal 835/1000 (GDDR5 + boost at full; no boost = 735 MHz core). If you look at a standard clocked GDDR3 variant, Iris Pro 5200 and the 650M are pretty much very similar (depending on the games), within around 10%. New Intel drivers should further shorten the gap (given that Intel is quite good in compute).
JarredWalton - Sunday, June 2, 2013 - link
http://www.anandtech.com/bench/Product/814
For the games I tested, the rMBP15 isn't that much faster in many titles. Iris isn't quite able to match GT 650M, but it's pretty close all things considered.
Spunjji - Tuesday, June 4, 2013 - link
I will believe this about new Intel drivers when I see them. I seriously, genuinely hope they surprise me, though.
dbcoopernz - Saturday, June 1, 2013 - link
Are you going to test this system with madVR?
Ryan Smith - Sunday, June 2, 2013 - link
We have Ganesh working to answer that question right now.
dbcoopernz - Sunday, June 2, 2013 - link
Cool. :)
JDG1980 - Saturday, June 1, 2013 - link
I would have liked to see some madVR tests. It seems to me that the particular architecture of this chip - lots of computing power, somewhat less memory bandwidth - would be very well suited to madVR's better processing options. It's been established that difficult features like Jinc scaling (the best quality) are limited by shader performance, not bandwidth.
The price is far steeper than I would have expected, but once it inevitably drops a bit, I could see mini-ITX boards with this become a viable solution for high-end, passively-cooled HTPCs.
By the way, did they ever fix the 23.976 fps error that has been there since Clarkdale?
dbcoopernz - Saturday, June 1, 2013 - link
Missing Remote reports that 23.976 timing is much better.
http://www.missingremote.com/review/intel-core-i7-...
8steve8 - Saturday, June 1, 2013 - link
Great work Intel, and great review Anand.
As a fan of low power and small form factor high performance PCs, I'm excited about the 4770R.
my question is how do we get a system with 4770R ?
will it be in an NUC, if so, when/info?
will there be mini-itx motherboards with it soldered on?
bill5 - Saturday, June 1, 2013 - link
Anand, would you say the lack of major performance improvement due to Crystalwell bodes ill for the Xbox One?
The idea is that ESRAM could make the 1.2 TF Xbox One GPU "punch above its weight" with more efficiency, due to the 32MB of low latency cache (ALUs will stall less waiting on data). However, these results don't really show that for Haswell (the compute results that scale perfectly with ALUs, for example).
Note that here I'm distinguishing between the cache as a bandwidth saver - I think we can all agree it will serve that purpose - and as an actual performance enhancer. I'm interested in the latter for the Xbox One.
Kevin G - Saturday, June 1, 2013 - link
A couple of quotes and comments from the article:
"If Crystalwell demand is lower than expected, Intel still has a lot of quad-core GT3 Haswell die that it can sell and vice versa."
Intel is handicapping demand for GT3e parts by not shipping them in socketed form. I'd love to upgrade my i7-2600k system to a 4770K+GT3e+TSX setup. Seriously Intel, ship that part and take my money.
"The Crystalwell enabled graphics driver can choose to keep certain things out of the eDRAM. The frame buffer isn’t stored in eDRAM for example."
WTF?!? The eDRAM would be the ideal place to store various frequently used buffers. Having 128 MB of memory leaves plenty of room for streaming in textures as need be. The only reason not to hold the full frame buffer is if Intel has an aggressive tile based rendering design and only a tile is stored there. I suspect that Intel's driver team will change this in the future.
"An Ultrabook SKU with Crystalwell would make a ton of sense, but given where Ultrabooks are headed (price-wise) I’m not sure Intel could get any takers."
I bet Apple would ship a GT3e based part in the MacBook Air form factor. They'd do something like lower the GPU clocks to prevent it from melting but they want it. It wouldn't surprise me if Apple managed to negotiate a custom part from Intel again.
Ultimately I'm pleased with GT3e. On the desktop I can see the GPU being used for OpenCL tasks like physics while my Radeon 7970 handles the rest of the graphics load. Or for anything else, I'd like GT3e for the massive L4 cache.
tipoo - Saturday, June 1, 2013 - link
"Ultimatley I'm pleased with GT3e. On the desktop I can see the GPU being used for OpenCL tasks like physics while my Radeon 7970 handles the rest of the graphics load. Or for anything else, I'd like GT3e for the massive L4 cache."I'd love that to work, but what developer would include that functionality for that niche setup?
Kevin G - Saturday, June 1, 2013 - link
OpenCL is supposed to be flexible enough that you can mix execution targets. This also includes the possibility of OpenCL drivers for CPUs in addition to those that use GPUs. At the very least, it'd be nice for a game or application to manually select the OpenCL target in some config file.
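A minimal sketch of what that kind of manual target selection could look like, using the pyopencl bindings (assumed installed); the simple GPU/CPU preference below stands in for the config-file entry described above and is not specific to Intel's driver:

```python
# Build one OpenCL context per available device class so work can be split between them.
import pyopencl as cl

def first_device(device_type):
    for platform in cl.get_platforms():
        try:
            devices = platform.get_devices(device_type=device_type)
        except cl.Error:  # some runtimes report "no device of this type" as an error
            devices = []
        if devices:
            return devices[0]
    return None

targets = {"gpu": first_device(cl.device_type.GPU),
           "cpu": first_device(cl.device_type.CPU)}
contexts = {name: cl.Context([dev]) for name, dev in targets.items() if dev is not None}
for name, ctx in contexts.items():
    print(name, "->", ctx.devices[0].name)
```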
Egg - Saturday, June 1, 2013 - link
I'm only a noob high school junior, but aren't frame buffers tossed after display? What would be the point of storing a frame buffer? You don't reuse the data in it at all. As far as I know, frame buffer != unpacked textures.
Also, aren't most modern fully programmable GPUs not tile based at all?
Also, wasn't it mentioned that K-series parts don't have TSX?
Kevin G - Saturday, June 1, 2013 - link
The z-buffer in particular is written and often read. Deferred rendering also blends multiple buffers together, and at 128 MB in size, a deferred renderer can keep several in that memory. AA algorithms also perform read/writes on the buffer. At some point, I do see Intel moving the various buffers into the 128 MB of eDRAM as drivers mature. In fairness, this change may not be universal to all games and may depend on things like resolution.
Then again, it could be a true cache for the GPU. This would mean that the drivers do not explicitly store the frame buffers there, but they could be stored there based upon prefetching of data. Intel's caching hierarchy is a bit weird, as the CPU's L3 cache can also be used as an L4 cache for the GPU on HD 2000/2500/3000/4000 parts. Presumably the eDRAM would be an L5 cache under the Sandy Bridge/Ivy Bridge schema. The eDRAM has been described as a victim cache, though for GPU operations it would make sense to prefetch large amounts of data (textures, buffers). It'd be nice to get some clarification on this with Haswell.
PowerVR is still tile based. Previous Intel integrated solutions were also tile based, though they dropped that with the HD line (and I can't remember if the GMA line was tile based as well).
And you are correct that the K series don't have TSX, hence why I'd like a 4770K with GT3e and TSX. Also I forgot to throw in VT-d since that too is arbitrarily disabled in the K series.
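To put rough numbers on the buffer discussion above, a back-of-the-envelope sketch of how much of the 128 MB common render targets would actually occupy; the formats chosen are typical ones for illustration, not what Intel's driver actually allocates:

```python
# Approximate render-target footprint: width * height * bytes per pixel, reported in MB.
def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

w, h = 1920, 1080
color   = buffer_mb(w, h, 4)        # RGBA8 back buffer             ~7.9 MB
depth   = buffer_mb(w, h, 4)        # 32-bit depth/stencil          ~7.9 MB
gbuffer = 4 * buffer_mb(w, h, 8)    # four RGBA16F deferred targets ~63.3 MB
print(f"{color + depth + gbuffer:.1f} MB of the 128 MB eDRAM")  # ~79 MB; textures still spill to DRAM
```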
IntelUser2000 - Sunday, June 2, 2013 - link
Kevin G: Intel dropped the tile-based rendering in the GMA 3 series generation back in 2006. Although their tile rendering was different from PowerVR's.
Egg - Sunday, June 2, 2013 - link
Fair points - I was being a bit myopic and only thought about buffers persisting across frames, neglecting the fact that buffers often need to be reused within the process of rendering a single frame! Can you explain how the CPU's L3 cache is an L4 cache for the GPU? Does the GPU have its own L3 cache already?
Also, I don't know whether PowerVR's architecture is considered fully programmable yet. I know they have OpenCL capabilities, but reading http://www.anandtech.com/show/6112/qualcomms-quadc... I'm getting a vague feeling that it isn't as complete as GCN or Kepler, feature wise.
IntelUser2000 - Tuesday, June 4, 2013 - link
Gen 7, the Ivy Bridge generation, has its own L3 cache. So you have the LLC (which is L3 for the CPU), and its own L3. Haswell is Gen 7.5.
DanaGoyette - Saturday, June 1, 2013 - link
Any idea if this IGP supports 30-bit color and/or 120Hz displays?
Currently, laptops like the HP EliteBook 8770w and Dell Precision M6700 haven't been able to use Optimus if you opt for such displays. It would be nice to see that question addressed...
DickGumshoe - Saturday, June 1, 2013 - link
I have been planning on getting a Haswell rMBP 15". I was holding out for Haswell mainly due to the increased iGPU performance. My primary issue with the current Ivy Bridge rMBP is the lagginess of much of the UI, especially when there are multiple open windows.
However, I'm a bit concerned about how the Haswell CPUs will compare with the current Ivy Bridge CPUs that Apple is shipping with the rMBP. The Haswell equivalents of the current rMBP Ivy Bridge CPUs do not have Iris Pro, they only have the "slightly improved" HD 4600.
Obviously, we still need to wait until WWDC, but based on the released Haswell info, will Haswell only be a slight bump in performance for the 15" rMBP? If so, that is *very* disappointing news.
hfm - Saturday, June 1, 2013 - link
This is a huge win for Intel, definitely performance on par with a 650M. It's just as playable in nearly all those games at 1366x768. Even though the 650M pulls away at 1600x900, I wouldn't call either GPU playable in most of those games at that resolution.
If you look at it intelligently, this is a huge win by Intel. The 750M may save them, but if I were in the market for an Ultrabook to complement my gaming notebook, I would definitely go with Iris Pro. Hell, even if I didn't have a dedicated gaming notebook I would probably get Iris Pro in my Ultrabook just for the power savings; it's not that much slower at playable resolutions.
IntelUser2000 - Tuesday, June 4, 2013 - link
Iris Pro 5200 with eDRAM is only for the quad core standard notebook parts. The highest available for the Ultrabook is the 28W version, the regular Iris 5100. Preliminary results show the Iris 5100 to be roughly on par with the desktop HD 4600.
smilingcrow - Saturday, June 1, 2013 - link
For those commenting about pricing: Intel has only released data for the high end Iris Pro enabled SKUs at this point, and cheaper ones are due later.
The high end chips are generally best avoided due to being poor value, so stay tuned.
whyso - Saturday, June 1, 2013 - link
Yes, the rMBP is clearly using 90 watts on an 85 watt power adapter for the WHOLE SYSTEM!
gxtoast - Sunday, June 2, 2013 - link
Question for Anand:
I'm looking at getting a Haswell 15" Ultrabook with 16GB RAM and plenty of SSD to run up some fairly sophisticated Cisco, Microsoft and VMware cloud labs.
Is it likely that the Crystalwell cache could offset the lower performance specifications on the 4950HQ to make it as competitive, or more so, against the 4900MQ in this scenario?
It would also be good to understand the performance improvement, for non-game video tasks, the HQ part might have over the 4900MQ on a FHD panel. If the advantage isn't there, then, unless the Crystalwell makes a big difference, the 4900MQ part is likely the one to get.
Cheers
piesquared - Sunday, June 2, 2013 - link
Question: why in Kabini reviews did we get the standard "just wait till Intel releases their next gen parts to see the real competition OMGBBSAUCE!!" marketing spiel, while there was not a mention that Haswell's competition is Kaveri?
IntelUser2000 - Sunday, June 2, 2013 - link
Uhh, because the Haswell launch was less than a month away from Kabini, while Kaveri is 6+ months away from Haswell?
AMD paper launched Kabini and Richland in March, and products are coming now. Kaveri claims to be late Q4 for desktop and early Q1 next year for mobile. If they do the same thing, that means Feb-March for desktop Kaveri and April/May for mobile. Yeah... perhaps you should think about that.
JarredWalton - Sunday, June 2, 2013 - link
The Kabini article never said, "just wait and see what Intel has coming!" so much as it said, "We need to see the actual notebooks to see how this plays out, and with Intel's Celeron and Pentium ULV parts already at Kabini's expected price point, it's a tough row to hoe." Kabini is great as an ARM or Atom competitor; it's not quite so awesome compared to Core i3, unless the OEMs pass the price savings along in some meaningful way. I'd take Kabini with a better display over Core i3 ULV, but I'll be shocked if we actually see a major OEM do Kabini with a quality 1080p panel for under $500.
arjunp2085 - Sunday, June 2, 2013 - link
I was under the impression that Richland has been selling on Newegg, as per a comment on an earlier article. I was also wondering, since you had done a Richland review with the MSI notebook, whether you would do a similar comparison.
http://www.anandtech.com/show/6949/msi-gx70-3be-ri...
It would be appreciated if you just placed all the possible matches in a table, with a paragraph on the selection criteria used to make the choices for the review, to dispel any opinion that models were missed.
GameHopper - Sunday, June 2, 2013 - link
Why no real power measurements? If it's so important to Iris Pro, real world power numbers would be more useful than just listing the TDP of the parts.
shinkueagle - Sunday, June 2, 2013 - link
The GIANT has awoken! Performance-wise, it's amazing! Destroys Trinity! Price-wise... Well, that area needs some work...
trip1ex - Sunday, June 2, 2013 - link
Yes, really disappointed there is no socketed CPU solution that has the best iGPU config.
But I suppose I already have an Ivy Bridge i5 for my WMC PC and it is good enough. It would still be a nice cheap way to build a secondary small desktop that could also do some light gaming.
vFunct - Sunday, June 2, 2013 - link
Curious why Intel doesn't just go straight for the jugular and release a discrete GPU part on their 22nm process. Nvidia/AMD are stuck at 28nm because of their foundries, and it appears Intel's GPU architecture is feature complete and therefore competitive with the discrete parts if they scaled everything up by 4x or 5x.
Nvidia & AMD should be worried about their core high-profit-margin business!
jamescox - Sunday, June 2, 2013 - link
The photo you have on page 4 showing the 2 separate die is strange. The Haswell die should not be square. Other photos I have seen show the expected (extremely rectangular) Haswell die and a tiny RAM chip. I would expect a Haswell based chip with double the CPU (8 real cores) and no GPU eventually; this would be almost square. Do you know why your chip does not match other multi-chip module photos online?
jamescox - Tuesday, June 4, 2013 - link
I guess the other photos are Haswell plus an integrated chipset in the same module. The photo of the two die is still strange, as neither of these looks like a Haswell die.
IntelUser2000 - Tuesday, June 4, 2013 - link
That's because that's the picture for GT3e Iris Pro 5200 graphics. The bigger square die is the Haswell CPU+GT3 GPU, while the smaller one is the on-package DRAM.
The dual core with on-package chipset is even longer than the regular Haswell.
tipoo - Wednesday, January 21, 2015 - link
Yes it should; you're thinking of the ultrabook chips with a controller to the side, not eDRAM. Those ones are rectangular. Look at a Haswell MBP 15" teardown to verify.
TheJian - Sunday, June 2, 2013 - link
This is useless at anything above 1366x768 for games (and even that is questionable as I don't think you were posting minimum fps here). It will also be facing richland shortly not AMD's aging trinity. And the claims of catching a 650M...ROFL. Whatever Intel. I wouldn't touch a device today with less than 1600x900 and want to be able to output it to at least a 1080p when in house (if not higher, 22in or 24in). Discrete is here to stay clearly. I have an Dell i9300 (Geforce 6800) from ~2005 that is more potent and runs 1600x900 stuff fine, I think it has 256MB of memory. My dad has an i9200 (radeon 9700pro with 128mb I think) that this IRIS would have trouble with. Intel has a ways to go before they can claim to take out even the low-end discrete cards. You are NOT going to game on this crap and enjoy it never mind trying to use HDMI/DVI out to a higher res monitor at home. Good for perhaps the NICHE road warrior market, not much more.But hey, at least it plays quite a bit of the GOG games catalog now...LOL. Icewind Dale and Baldur's gate should run fine :)
wizfactor - Sunday, June 2, 2013 - link
Shimpi's guess as to what will go into the 15-inch rMBP is interesting, but I have a gut feeling that it will not be the case. Despite the huge gains that Iris Pro has over the existing HD 4000, it is still a step back from last year's GT 650M. I doubt Apple will be able to convince its customers to spend $2199 on a computer that has less graphics performance than last year's (now discounted) model. Despite its visual similarity to an Air, the rMBP still has performance as a priority, so my guess is that Apple will stick to discrete for the time-being.That being said, I think Iris Pro opens up a huge opportunity to the 15-inch rMBP lineup, mainly a lower entry model that finally undercuts the $2000 barrier. In other words, while the $2199 price point may be too high to switch entirely to iGPU, Apple might be able to pull it off at $1799. Want a 15-inch Retina Display? Here's a more affordable model with decent performance. Want a discrete GPU? You can get that with the existing $2199 price point.
As far as the 13-inch version is concerned, my guesses are rather murky. I would agree with the others that a quad-core Haswell with Iris Pro is the best-case scenario for the 13-inch model, but it might be too high an expectation for Apple engineers to live up to. I think Apple's minimum target with the 13-inch rMBP should be dual-core Haswell with Iris 5100. This way, Apple can stick to a lower TDP via dual-core, and while Iris isn't as strong as Iris Pro, its gain over HD 4000 is enough to justify the upgrade. Of course, there's always the chance that Apple has temporary exclusivity on an unannounced dual-core Haswell with Iris Pro, the same way it had exclusivity with ULV Core 2 Duo years ago with MBA, but I prefer not to make Haswell models out of thin air.
BSMonitor - Monday, June 3, 2013 - link
You are assuming that the next MBP will have the same chassis size. If thin is in, the dGPU-less Iris Pro is EXTREMELY attractive for heat/power considerations.
More likely is the end of the thicker MBP and separate thin MBAir lines. Almost certainly, starting in two weeks we have just one line, MBP, all with Retina, all the thickness of the MBAir, 11" up to 15".
TheJian - Sunday, June 2, 2013 - link
As far as encoding goes, why do you guys ignore CUDA?
http://www.extremetech.com/computing/128681-the-wr...
Extremetech's last comment:
"Avoid MediaEspresso entirely."
So the one you pick is the worst of the bunch to show GPU power....jeez. You guys clearly have a CS6 suite lic so why not run Adobe Premiere which uses Cuda and run it vs the same vid render you use in Sony's Vegas? Surely you can rip the same vid in both to find out why you'd seek a CUDA enabled app to rip with. Handbrake looks like they're working on supporting Cuda also shortly. Or heck, try FREEMAKE (yes free with CUDA). Anything besides ignoring CUDA and acting like this is what a user would get at home. If I owned an NV card (and I don't in my desktop) I'd seek cuda for everything I did that I could find. Freemake just put out another update 5/29 a few days ago.
http://www.tested.com/tech/windows/1574-handbrake-...
2.5yrs ago it was equal, my guess is they've improved Cuda use by now. You've gotta love Adam and Jamie... :) Glad they branched out past just the Mythbusters show.
xrror - Sunday, June 2, 2013 - link
I have a bad suspicion one of the reasons why you won't see a desktop Haswell part with eDRAM is that it would pretty much euthanize socket 2011 on the spot.
IF Intel does actually release a "K" part with it enabled, I wonder how restrictive or flexible the frequency ratios on the eDRAM will be?
Speaking of socket 2011, I wonder if/when Intel will ever refresh it from Sandy-E?
wizfactor - Sunday, June 2, 2013 - link
I wouldn't call myself an expert on computer hardware, but isn't it possible that Iris Pro's bottleneck at 1600x900 resolutions could be attributed to insufficient video memory? Sure, that eDRAM is a screamer as far as latency is concerned, but if the game is running on higher resolutions and utilising HD textures, that 128MB would fill up really quickly, and the chip would be forced to swap often. Better to not have to keep loading and unloading stuff in memory, right?Others note the similarity between Crystalwell and the Xbox One's 32MB Cache, but let's not forget that the Xbox One has its own video memory; Iris Pro does not, or put another way, it's only got 128 MB of it. In a time where PC games demand at least 512 MB of video RAM or more, shouldn't the bottleneck that would affect Iris Pro be obvious? 128 MB of RAM is sure as hell a lot more than 0, but if games demand at least four times as much memory, then wouldn't Iris Pro be forced to use regular RAM to compensate, still? This sounds to me like what's causing Iris Pro to choke at higher resolutions.
If I am at least right about Crystalwell, it is still very impressive that Iris Pro was able to get in reach of the GT 650M with so little memory to work with. It could also explain why Iris Pro does so much better in Crysis: Warhead, where the minimum requirements are more lenient with video memory (256 MB minimum). If I am wrong, however, somebody please correct me, and I would love to have more discussion on this matter.
BSMonitor - Monday, June 3, 2013 - link
Methinks thou dost not know what thou art talking about ;)
F_A - Monday, June 3, 2013 - link
The video memory is stored in main memory, be it 4GB or above... (so the minimum specs of Crysis are clearly met)... the point is bandwidth.
The article says there is roughly 50GB/s when the cache is run at 1.6 GHz.
So ramping it up in future makes the new Iris 5300, I suppose.
glugglug - Tuesday, June 4, 2013 - link
Video cards may have 512MB to 1GB of video memory for marketing purposes, but you would be hard pressed to find a single game title that makes use of more than 128.
tipoo - Wednesday, January 21, 2015 - link
Uhh, what? Games can use far more than that; seeing them push past 2GB is common. But what matters is how much of that memory needs high bandwidth, and that's where 128MB of cache can be a good enough solution for most games.
boe - Monday, June 3, 2013 - link
As soon as Intel CPUs have video performance that exceeds Nvidia and AMD flagship video cards I'll get excited. Until then I think of them as something to be disabled on workstations and to be tolerated on laptops that don't have better GPUs on board.
MySchizoBuddy - Monday, June 3, 2013 - link
So Intel just took the OpenCL crown. Never thought this day would come.
prophet001 - Monday, June 3, 2013 - link
I have no idea whether or not any of this article is factually accurate.
However, the first page was a treat to read. Very well written.
:)
Teemo2013 - Monday, June 3, 2013 - link
Great success by Intel.
The 4600 is near the GT 630 and HD 4650 (much better than the 6450, which sells for $15 at Newegg).
The 5200 is better than the GT 640 and HD 6670 (which currently sells for about $50 at Newegg).
Intel's integrated graphics used to be worthless compared with discrete cards. It has slowly caught up over the past 3 years, and now the 5200 is beating a $50 card. Can't wait for next year!
Hopefully this will finally push AMD and Nvidia to come up with meaningful upgrades to their low-end product lines.
Cloakstar - Monday, June 3, 2013 - link
A quick check for my own sanity:
Did you configure the A10-5800K with 4 sticks of RAM in bank+channel interleave mode, or did you leave it memory bandwidth starved with 2 sticks or locked in bank interleave mode?
The numbers look about right for 2 sticks, and if that is the case, it would leave Trinity at about 60% of its actual graphics performance.
I find it hard to believe that the 5800K gets about a quarter of the performance per watt of the 4950HQ in graphics, even with the massive, server-crushing cache.
andrerocha - Monday, June 3, 2013 - link
Is this new CPU faster than the 4770K? It sure costs more.
zodiacfml - Monday, June 3, 2013 - link
Impressive, but one has to take advantage of the compute/Quick Sync performance to justify the increase in price over the HD 4600.
ickibar1234 - Tuesday, June 4, 2013 - link
Well, my Asus G50VT laptop is officially obsolete! An Nvidia 512MB GDDR3 9800GS is completely pwned by this integrated GPU, and the CPU is about 50-65% faster clock for clock than the last generation Core 2 Duo Penryn chips. Sure, my X9100 can overclock stably to 3.5GHz, but this one can get close even if all cores are fully taxed.
Can't wait to see what the Broadwell die shrink brings; maybe a 6-core with Iris or a higher clocked 4-core?
I too see that dual core versions of mobile Haswell with this integrated GPU would be beneficial. Could go into small 4.5 pounds laptops.
AMD.....WTH are you going to do.
zodiacfml - Tuesday, June 4, 2013 - link
AMD has to create a Crystalwell of their own. I never thought Intel could beat them to it, since AMD's integrated GPUs have always needed more bandwidth.
Spunjji - Tuesday, June 4, 2013 - link
They also need to find a way past their manufacturing process disadvantage, which may not be possible at all. We're comparing 22nm apples to 32/28nm pears here; it's a relevant comparison because those are the realities of the marketplace, but it's worth bearing in mind when comparing architecture efficiencies.
Death666Angel - Tuesday, June 4, 2013 - link
"What Intel hopes however is that the power savings by going to a single 47W part will win over OEMs in the long run, after all, we are talking about notebooks here."
This plus simpler board designs, fewer voltage regulators and less space used.
And I agree, I want this in a K-SKU.
Death666Angel - Tuesday, June 4, 2013 - link
And doesn't MacOS support Optimus?
RE: "In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%."
GullLars - Tuesday, June 4, 2013 - link
Those are strong words at the end, but I agree Intel should make a K-series CPU with Crystalwell. What comes to mind is that they may be doing that for Broadwell.
The Iris Pro solution with eDRAM looks like a nice fit for what I want in my notebook upgrade coming this fall. I've been getting by on a Core 2 Duo laptop, and didn't go for Ivy Bridge because there were no good models with a 1920x1200 or 1920x1080 display without dedicated graphics. For a system that will not be used for gaming at all, but needs resolution for productivity, it wasn't worth it. I hope this will change with Haswell, and that I will be able to get a 15" laptop with >= 1200p without dedicated graphics. The 4950HQ or 4850HQ seems like an ideal fit. I don't mind spending $1500-2000 for a high quality laptop :)
IntelUser2000 - Tuesday, June 4, 2013 - link
ANAND!!
You got the FLOPS rating wrong on the Sandy Bridge parts. They are at 1/2 of Ivy Bridge.
1350MHz with 12 EUs and 8 FLOPs/EU results in 129.6 GFLOPS. While it's true that in very limited scenarios Sandy Bridge's iGPU can co-issue, the benefit is small enough to be non-existent. That is why a 6 EU HD 2500 comes close to the 12 EU HD 3000.
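The 129.6 GFLOPS figure follows directly from the usual peak-throughput formula; a minimal sketch using only the EU counts and clocks named in the comment:

```python
# Peak single-precision throughput = clock (MHz) * EU count * FLOPs per EU per clock / 1000.
def peak_gflops(clock_mhz, eus, flops_per_eu_per_clock):
    return clock_mhz * eus * flops_per_eu_per_clock / 1000

print(peak_gflops(1350, 12, 8))   # HD 3000 (Sandy Bridge), counted as argued above -> 129.6
print(peak_gflops(1350, 12, 16))  # the doubled co-issue figure being disputed      -> 259.2
```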
Hrel - Tuesday, June 4, 2013 - link
If they use only the HD 4600 and Iris Pro, that'd probably be better. As long as it's clearly labeled on laptops: HD 4600 (don't expect to do any video work on this), Iris Pro (it's passable in a pinch).
But I don't think that's what's going to happen. Iris Pro could be great for Ultrabooks; I don't really see any use outside of that though. A low end GT 740M is still a better option in any laptop that has the thermal room for it. Considering you can put those in 14" or larger ultrabooks, I still think Intel's graphics aren't serious. Then you consider the lack of compute, PhysX, driver optimization, game specific tuning...
Good to see a hefty performance improvement. Still not good enough though. Also pretty upsetting to see how many graphics SKUs they've released. OEMs are gonna screw people who don't know better just to get the price down.
Hrel - Tuesday, June 4, 2013 - link
The SKU price is 500 DOLLARS!!!! They're charging you 200 bucks for a pretty shitty GPU. Intel's greed is so disgusting it overrides the engineering prowess of their employees. Truly disgusting, Intel, to charge that much for that level of performance. AMD we need you!!!!
xdesire - Tuesday, June 4, 2013 - link
May I ask a noob question? Do we have no i5s or i7s WITHOUT on-board graphics any more? As a gamer I'd prefer to have a CPU + discrete GPU in my gaming machine, and I don't like having extra stuff stuck on the CPU, lying there consuming power and having no use (for my part) whatsoever. No Ivy Bridge or Haswell i5s/i7s without an iGPU or whatever you call it?
flyingpants1 - Friday, June 7, 2013 - link
They don't consume power while they're not in use.
Hrel - Tuesday, June 4, 2013 - link
WHY THE HELL ARE THOSE SO EXPENSIVE!!!!! Holy SHIT! 500 dollars for a 4850HQ? They're charging you 200 dollars for a shitty GPU with no dedicated RAM at all! Just a cache! WTFF!!!
Intel's greed is truly disgusting... even in the face of their engineering prowess.
MartenKL - Wednesday, June 5, 2013 - link
What I don't understand is why Intel didn't do a "next-gen console like" processor. Like taking the 4770R and doubling or even quadrupling the GPU; wasn't there space? The thermal headroom must have been there, as we are used to CPUs with as high as 130W TDP. Anyhow, combining that with awesome drivers for Linux would have been real competition to AMD/PS4/XONE for Valve/Steam. A complete system under 150W capable of awesome 1080p60 gaming.
So now I am looking for the best performing GPU under 75W, i.e. no external power. Which is it, still the Radeon HD 7750?
Phrontis - Wednesday, June 5, 2013 - link
I can't wait for one of these on a mITX board with 3 decent monitor outputs. There's enough power for the sort of things I do, if not for gaming.
Phrontis
khanov - Friday, June 7, 2013 - link
Without a direct comparison between HD 5000/5100 and Iris Pro 5200 with Crystalwell, how can we conclude that Crystalwell has any effect in any of the game benchmarks?
We really need to see Iris Pro 5200 vs HD5100 to get an apples to apples comparison and be able to determine if Crystalwell is worth the extra money.
MODEL3 - Sunday, June 9, 2013 - link
Haswell ULT GT3 (dual-core + GT3) = 181mm², and the 40 EU Haswell GPU is 174mm².
7mm² for everything else except GT3?
n13L5 - Tuesday, June 11, 2013 - link
" An Ultrabook SKU with Crystalwell would make a ton of sense, but given where Ultrabooks are headed (price-wise) I’m not sure Intel could get any takers."They sure seem to be going up in price, rather than down at the moment...
anandfan86 - Tuesday, June 18, 2013 - link
Intel has once again made their naming so confusing that even their own marketing weasels can't get it right. Notice that the Intel slide titled "4th Gen Intel Core Processors H-Processors Line" calls the graphics in the i7-4950HQ and i7-4850HQ "Intel HD Graphics 5200" instead of the correct name which is "Intel Iris Pro Graphics 5200". This slide calls the graphics in the i7-4750HQ "Intel Iris Pro Graphics 5200" which indicates that the slide was made after the creation of that name. It is little wonder that most media outlets are acting as if the biggest tech news of the month is the new pastel color scheme in iOS 7.Myoozak - Wednesday, June 26, 2013 - link
The peak theoretical GPU performance calculations shown are wrong for Intel's GFLOPS numbers. Correct numbers are half of what is shown. The reason is that Intel's execution units are made of of an integer vec4 processor and a floating-point vec4 processor. This article correctly states it has a 2xvec4 SIMD, but does not point out that half is integer and half is floating-point. For a GFLOPS computation, one should only include the floating-point operations, which means only half of that execution unit's silicon is getting used. The reported computation performance would only be correct if you had an algorithm with a perfect mix of integer & float math that could be co-issued. To compare apples to apples, you need to stick to GFLOPS numbers, and divide all the Intel numbers in the table by 2. For example, peak FP ops on the Intel HD4000 would be 8, not 16. Compared this way, Intel is not stomping all over AMD & nVidia for compute performance, but it does appear they are catching up.alexcyn - Tuesday, August 6, 2013 - link
I heard that Intel's 22nm process equals TSMC's 26nm, so the difference is not that big.
Doughboy(^_^) - Friday, August 9, 2013 - link
I think Intel could push their yield way up by offering 32MB and 64MB versions of Crystalwell for i3 and i5 processors. They could charge the same markup for the 128MB version, but sell the 32/64MB versions for cheaper. It would cost Intel less and probably let them take even further market share from low-end dGPUs.
krr711 - Monday, February 10, 2014 - link
It is funny how a non-PC company changed the course of Intel forever for the good. I hope that Intel is wise enough to use this to spring-board the PC industry to a new, grand future. No more tick-tock nonsense arranged around sucking as many dollars out of the customer as possible, but give the world the processing power it craves and needs to solve the problems of tomorrow. Let this be your heritage and your profits will grow to unforeseen heights. Surprise us!
s2z.domain@gmail.com - Friday, February 21, 2014 - link
I wonder where this is going. Yes, the multi-core, on-hand cache and graphics may be goody, ta.
But human interaction in actual products?
I weigh in at 46kg but think nothing of running with a Bergen/burden of 20kg, so a big heavy laptop with an integrated 10hr battery and 18.3" screen would be efficacious.
What is all this current affinity with small screens?
I could barely discern the vignette of the feathers of a water fowl at no more than 130m yesterday, on a morning run in the Clyde Valley woodlands.
For the "laptop", > 17" screen; desktop, 2*27"; all discernible pixels, every one of them to be a prisoner. 4 core or 8 core, and I bore the poor little devils with my incompetence with DSP and the Julia language. And SPICE etc.
P.S. Can still average 11mph @ 50+ years of age. Some things one does wish to change. And thanks to the Jackdaws yesterday morning whilst I was fertilizing a Douglas Fir; they took the boredom out of an otherwise perilous predicament.
johncaldwell - Wednesday, March 26, 2014 - link
Hello,
Look, 99% of all the comments here are out of my league. Could you answer a question for me please? I use an open source 3D computer animation and modeling program called Blender3d. The users of this program say that the GTX 650 is the best GPU for it, citing that it works best for calculating CPU intensive tasks such as rendering with HDR and fluids and other particle effects, and they say that other cards that work great for gaming and video fall short for that program. Could you tell me how this Intel Iris Pro would do in a case such as this? Would the tests made here be relevant to this case?
jadhav333 - Friday, July 11, 2014 - link
Same here, johncaldwell. I would like to know the same.
I am a Blender 3D user and work with the Cycles renderer, which also uses the GPU to process its renders. I am planning to invest in a new workstation: either custom-built hardware for a Linux box or the latest MacBook Pro from Apple. In the case of the latter, how useful will it be in terms of performance for GPU rendering in Blender?
Anyone care to comment on this, please.
HunkoAmazio - Monday, May 26, 2014 - link
Wow, I can't believe I understood this. My computer architecture class paid off... except I got lost when they were talking about N1/N2 nodes... that must have been a post-2005 feature in CPU northbridge/southbridge technology.
systemBuilder - Tuesday, August 5, 2014 - link
I don't think you understand the difference between DRAM circuitry and arithmetic circuitry. A DRAM foundry process is tuned for high capacitance so that the memory lasts longer before refresh. High capacitance is DEATH to high-speed circuitry for arithmetic execution; that circuitry is tuned for very low capacitance, ergo, tuned for speed. By using DRAM instead of SRAM (which could have been built on-chip with low-capacitance foundry processes), Intel enlarged the cache by 4x+, since an SRAM cell is about 4x+ larger than a DRAM cell.
Fingalad - Friday, September 12, 2014 - link
CHEAP SLI! They should make a cheap Iris Pro graphics card and do a new board where you can add that board for SLI.
P39Airacobra - Thursday, January 8, 2015 - link
Not a bad GPU at all, On a small laptop screen you can game just fine, But it should be paired with a lower CPU, And the i3, i5, i7 should have Nvidia or AMD solutions.