120 Comments

  • jabber - Thursday, February 5, 2015 - link

    Crikey...I'm still on 16GB of DDR2 ECC! I feel old.
  • foxtrot1_1 - Thursday, February 5, 2015 - link

    If you're still running DDR2 it's probably not the RAM that's holding your system back.
  • nathanddrews - Friday, February 6, 2015 - link

    Clearly. It's rather sad to see how little impact RAM has on performance... which raises the question of who is buying this stuff? Is the only strength stability during overclocking?
  • III-V - Friday, February 6, 2015 - link

    Well Haswell-E users are tied to DDR4, so yeah there's that :)

    Of course, that's not what you were talking about. Memory bandwidth can have a big effect on certain workloads. IGPs need a bit of it (tapers off hard after 2133 MHz), but I know programs like WinZip and 7-Zip love memory bandwidth. There's certainly a lot of server and HPC workloads that love it too, but for most users, you're certainly right -- it's not worth it at the moment and may not really ever be a concerning bottleneck.
  • r3loaded - Thursday, February 5, 2015 - link

    You mean to say you've not bought a single computer since Core 2? Damn!
  • Murloc - Thursday, February 5, 2015 - link

    well do you really need additional CPU power?

    My overclocked E8500 (with stock cooler) was a beast, there was so much headroom, and I didn't change computer because of it.

    Right now I'm on a i5 750 from 2009 or something and it's totally fine. Also my GTX 275 still handles games in full hd just fine although not at max settings and it also becomes hot and only has DX10 so it's obsolete.
    So after 6 years, it's only the GPU that could use upgrading, the CPU/RAM part is not bottlenecking anything.

    Well not having sata 6 and that limiting my SSD is the one bad thing. I don't have any USB3 pendrives so I don't miss that.
    It's technology and power consumption making my CPU/chipset obsolete rather than performance.
  • Guspaz - Thursday, February 5, 2015 - link

    I'm still running a first-gen i7 (Nehalem) as my work computer, and it's still plenty snappy. I've got 12GB of RAM in the thing, and whatever I do have in the way of performance limitations would largely be resolved by sticking an SSD in there.
  • svan1971 - Sunday, February 8, 2015 - link

    Get the PX-AG256M6e and say goodbye to SATA 3 limitations. I put one in an old X58 board and it's amazing what a six-year-old 3.6 GHz OC'd i7 can do.
  • mikato - Monday, February 9, 2015 - link

    Nice post. I had an E7300 system and I had already upgraded the GPU to a GTX 760 and maxed out memory. It was somewhat slow in the newer games I played (Call of Duty), then I bought an E8500 on ebay and put that in and overclocked it finding a sweet spot, but it was still not quite as fast as I wanted. The poor optimization of COD Ghosts was partly to blame, but I ended up redoing the whole system at that point.

    I do use an i7-950 Bloomfield at work still and it does just fine.
  • jabber - Thursday, February 5, 2015 - link

    Just to clarify, I'm running a dual quad-core 3.33 GHz Xeon setup. Still keeps up with an i7 in a lot of cases. They cost peanuts too.
  • ddriver - Thursday, February 5, 2015 - link

    Upgrading became a non-issue around sandy bridge. My system is 3+ years old, and still within 10% of the corresponding tier of CPU today. Might as well be my last x86/Windows system before I switch to an ARM cluster under Linux...
  • mdav9609 - Sunday, February 14, 2016 - link

    Awesome! I've got an Intel server board running two quad-core Xeon E5620s (or something, don't remember the exact numbers right now, socket 775) and their performance is almost as good as an i7 2600K, at least according to PassMark. I'm running them with an EVGA GTX 580. Got no problem running Fallout 4 and The Witcher 3 on it in 1080p. It's not my primary machine, but I got one of these systems from work for free and put the second Xeon in it. Got it off eBay for like 15 bucks. Put a few 15K SAS drives in RAID 0 and it is pretty cool. I like maxing out older systems just for the hell of it.
  • pandemonium - Friday, February 6, 2015 - link

    I thought it was pretty clear in this, and many, many previous test comparisons of speed and DDR versions, that it makes very little difference. I'm on 8GB DDR2 and it's still going strong for everything I use it for. If it works...
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Bingo - poor guy had to go through all that just so kingpins can win prizes flying around the world on enduser dimes.
    To the sane electorate, memory means number of GB.

    I have to add I know plenty who, so long as the number is higher, they really and truly believe there is a performance increase. Sometimes they get confused, mixing generations of cpu's and memory, then their big brag on their junk doesn't work so well, but they still believe it.
    So the memory marketing works, because there are an awful lot of people out there who fit the above description.
  • phoenix_rizzen - Thursday, February 5, 2015 - link

    You're not alone.

    I have an HTPC running in the bedroom at home with an Athlon64 and 1.5 GB of DDR1. Plenty of horsepower for Windows XP, Google Chrome, and Plex web client, as it's connected to a 27" CRT TV.

    One of my desktops at work is a tri-core Athlon-II system with 4 GB of DDR2 (AM2 motherboard)

    My other work desktop is slightly more advanced, running a tri-core Athlon-II system with 8 GB of DDR3 (AM3 motherboard).

    And the home server is just slightly more advanced still, running a quad-core Phenom-II system with 8 GB of DDR3.
  • nwrigley - Thursday, February 5, 2015 - link

    I'm still running a quad Q6600 @ 3ghz with 8GB of DDR2. I've upgraded to an SSD and newer graphics card over its life. While money is certainly a limiting factor, in some ways there hasn't been a compelling reason to upgrade to a new machine.

    I work in video production and use high-end Macs at work. I often don't feel a difference between work and at home, with the exception of when the Mac doesn't have an SSD installed - then my system feels much faster (my boss isn't the type to upgrade an existing system, he'll just order a new one - very frustrating when a $200 SSD upgrade would make a huge difference).

    I'm surprised just how well this processor has stood the test of time, but we haven't seen the type of performance jump that happened after the Pentium 4 era. The big performance jump we did see was with SSDs, so that's where I put my money (along with a bigger/better monitor.) My computer has also been a quiet and reliable workhorse - you never know what problems may come with a new system.
  • Murloc - Thursday, February 5, 2015 - link

    I wouldn't feel compelled to change such a system either except for the sata/USB speeds, IF your use case can obtain advantages from faster speeds in that sector of course.
  • nwrigley - Thursday, February 5, 2015 - link

    Yep, you're absolutely right. A current motherboard would make both my SSD and GPU run faster with increased SATA and PCI Express speeds. USB 3.0 would be nice, but I don't have a current need for it.
  • Guspaz - Thursday, February 5, 2015 - link

    PCIe speeds in a Core 2 era system would still outstrip SATA on a modern system, though. Slap in an SSD using an x4 interface, for example, and you're talking 1GB/s of full duplex bandwidth even with PCIe 1.0, while modern SATA is still only doing around 600MB/s.

    Do you have any free PCIe slots that are more than 1x? Those could directly power a PCIe SSD, or you could stick in a SATA3 controller and use a SATA3 SSD. Ditto for USB3, if you did need it.
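    A quick sanity check of those numbers (Python purely as a sketch; PCIe 1.0 moves roughly 250 MB/s per lane in each direction after 8b/10b encoding, and each later generation roughly doubles the per-lane rate):

```python
PCIE1_MB_PER_LANE = 250  # PCIe 1.0: 2.5 GT/s with 8b/10b -> ~250 MB/s per lane, per direction
SATA3_MB = 600           # modern SATA tops out around 600 MB/s

def pcie_bw_mb(lanes, gen=1):
    """Approximate one-direction bandwidth of a PCIe link in MB/s."""
    return PCIE1_MB_PER_LANE * lanes * 2 ** (gen - 1)

pcie_bw_mb(4)  # 1000 MB/s each way on a PCIe 1.0 x4 slot, ahead of SATA3
```

    So even a Core 2 era x4 slot has the headroom for an SSD that SATA does not.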
  • nwrigley - Thursday, February 5, 2015 - link

    That's a good thought, unfortunately I only have PCIe 1x slots. Doesn't look like that would prGA-P35-DS3Rovide any benefit.
  • jabber - Friday, February 6, 2015 - link

    Well I've added USB 3.0, eSATA, a 7870 GPU, an SSHD and an SSD to my T5400 workstation. I haven't added SATA III as it's way too costly for a decent card; plus, even though I can only push 260 MB/s from an SSD, with 0.1 ms access times I really can't notice in the real world. The main chunk of the machine only cost around £200 to put together.
  • Striker579 - Friday, February 6, 2015 - link

    omg those retro color mb's....good times
  • Wardrop - Saturday, February 7, 2015 - link

    Wow, how did you accidentally insert your motherboard model in the middle of the word "provide"? Quite an impressive typo, lol
  • msroadkill612 - Saturday, September 2, 2017 - link

    To be the devil's advocate, many say there is little downside for most users in running a GPU at 8 lanes instead of 16.

    If adding an NVMe SSD means dropping the GPU to 8 lanes to free some up, I would be tempted.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Core 2 is getting weak - right click and open Task Manager, then see how often your quad is maxed at 100% usage (you can minimize it and check the green rectangle by the clock for percent used).

    That's how to check it - if it's hammered, it's time to sell it and move up. You might be quite surprised what a large jump it is to Sandy Bridge.
  • blanarahul - Thursday, February 5, 2015 - link

    TOTALLY OFF TOPIC but this is how Samsung's current SSD lineup should be:

    850: 120 GB, 250 GB TLC with TurboWrite

    850 Pro: 128 GB, 256 GB MLC

    850 EVO: 500/512 GB, 1000/1024 GB TLC w/o TurboWrite

    Because:
    a) 500 GB and 1000 GB 850 EVOs don't get any speed benefit from TurboWrite.
    b) 512/1024 GB PRO has only 10 MB/s sequential read, 2K IOPS and 12/24 GB capacity advantage over 500/1000 GB EVO. Sequential write speed, advertised endurance, random write speed, features etc. are identical between them.
    c) Remove TurboWrite from 850 EVO and you get a capacity boost because you are no longer running TLC NAND in SLC mode.
  • Cygni - Thursday, February 5, 2015 - link

    Considering what little performance impact these memory standards have had lately, DDR2 is essentially just as useful and relevant as the latest stuff... with the added advantage that you already own it.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    If you screw around long enough on Core 2 boards with slight and various CPU OCs, with differing FSBs and resulting memory divisors and timings, with mechanical drives present, you can sometimes produce an enormous performance increase and reduce boot times massively - the key seems to have been a different sound in the speedy access of the mechanical hard drive - though it often coincided with memory access time, but not always.
    I assumed and still do assume it is an anomaly in the exchanges on the various buses, where the CPU, RAM, hard drive, and north and south bridge timings just happen to all jibe together - so no subsystem is delayed waiting for some other overlap to "re-access".

    I've had it happen dozens of times on many differing systems but never could figure out any formula; it was always just luck goofing with CPU and memory speed in the BIOS.
    I'm not certain if it works with SSDs on Core 2s (socket 775, let's say) - though I assume it very well could, but the hard drive access sound would no longer be a clue.
  • retrospooty - Thursday, February 5, 2015 - link

    I love reviews like this... I will link it and keep it for every time some newb doof insists that high bandwidth RAM is important. We saw almost no improvement going from DDR400 cas2 to DDR3-1600 CAS10 now the same to DDR4 3000+ CAS freegin 80 LOL
  • menting - Thursday, February 5, 2015 - link

    depends on usage. for applications that require high total bandwidth, new generations of memory will be better, but for applications that require short latency, there won't be much improvement due to physical restraints of light speed
  • dgingeri - Thursday, February 5, 2015 - link

    Really, what applications use this bandwidth now?

    I'm the admin of a server software test lab, and we've been forced to move to the Xeon E5 v3 platform for some of our software, and it isn't seeing any enhancement from DDR4 either. These are machines and software using 256GB of memory at a time. The steps from Xeon E5 and DDR3 1066 to E5 v2 and DDR3 1333 and then up to E5 v3 and DDR4 2133 are showing no value whatsoever. We have a couple of workloads where data dedup and throughput are processor intensive and require a lot of memory, but memory bandwidth doesn't show any enhancement. However, since Dell is EOLing their R720, on Intel's recommendation, we're stuck moving up to the new platform. So it's driving up our costs with no increase in performance.

    I would think that if anything would use memory bandwidth, it would be data dedup or storage software. What other apps would see any help from this?
  • Mr Perfect - Thursday, February 5, 2015 - link

    Have you seen the reported reduction in power consumption? With 256GBs per machine, it sounds like you should be benefiting from the lower power draw(and lower cooling costs) of DDR4.
  • Murloc - Thursday, February 5, 2015 - link

    depending on the country and its energy prices, the expense to upgrade and the efficiency gains made, you may not even be able to recoup the costs, ever.
    From a green point of view it may be even worse due to embodied energy going to waste depending on what happens to the old server.
  • Mr Perfect - Friday, February 6, 2015 - link

    True, but if you have to buy DDR4 machines because the DDR3 ones are out of production (like the OP), then dropping power and cooling would be a neat side bonus.

    And now, just because I'm curious: if the max DDR4 DIMM is 8GB, and there's 256GB per server, then that's 32 DIMMs. 32 times 1 to 2 watts less per DIMM would be 32 to 64 watts less load on the PSU. If the PSU is 80% efficient, that should be 40 to 80 watts less at the wall per machine (you divide by the efficiency, since the supply's losses scale with load). Not really spectacular, but then you've also got cooling. If the AC is 80% efficient, that would be another 50 to 100 watts it no longer spends removing that heat. So in total, the new DDR4 server would cost you (wall draw plus AC draw) 90 to 180 watts less load per server versus the discontinued DDR3 ones. Not very exciting if you've only got a couple of them, but I could see large server farms benefiting.

    Anyone know how to work out the kWh and resulting price from electric rates?
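    The back-of-the-envelope estimate above can be rerun in a few lines (Python purely as illustration; the 32 DIMMs, 1-2 W per DIMM, and 80% efficiencies are the assumptions from the comment, with supply losses divided by efficiency rather than multiplied):

```python
def total_savings_w(dimms, w_per_dimm, psu_eff=0.8, ac_eff=0.8):
    """Wall-plus-cooling savings from drawing less DC power per DIMM."""
    dc_savings = dimms * w_per_dimm   # less load on the PSU
    wall = dc_savings / psu_eff       # losses divide by efficiency
    ac = wall / ac_eff                # AC power no longer spent removing that heat
    return wall + ac

total_savings_w(32, 1)  # 90.0 W at 1 W/DIMM
total_savings_w(32, 2)  # 180.0 W at 2 W/DIMM
```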
  • menting - Friday, February 6, 2015 - link

    100W for an hour straight = 0.1 kWh. If you figure 10-20 cents per kWh, it's about 1-2 cents per hour for a 100W difference. That comes to about $7-$14 per month in bills, provided that 100W is consistent 24/7.
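    A couple of lines make that arithmetic easy to rerun for other loads and rates (Python for illustration; the 10-20 cent rates are the assumption above):

```python
def monthly_cost_usd(watts, usd_per_kwh, hours=24 * 30):
    """Cost of a constant load over a 30-day month."""
    return watts / 1000 * hours * usd_per_kwh

monthly_cost_usd(100, 0.10)  # 7.2  -> about $7/month
monthly_cost_usd(100, 0.20)  # 14.4 -> about $14/month
```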
  • menting - Thursday, February 5, 2015 - link

    pattern recognition is one that comes to mind.
  • Murloc - Thursday, February 5, 2015 - link

    physical restraints of light speed? Isn't any minuscule parasitic capacitance way more speed limiting than that?
  • menting - Thursday, February 5, 2015 - link

    there's tons of limiting factors, with capacitance being one of those. But even if you take pains to optimize those, the one factor that nobody can get around is the speed of light.
  • menting - Thursday, February 5, 2015 - link

    i guess i should say speed of electricity in a conductive medium instead of speed of light.
  • retrospooty - Friday, February 6, 2015 - link

    Agreed if an app required high total bandwidth it would benefit.

    Now see if you can name a few that actually need that.
  • menting - Friday, February 6, 2015 - link

    simulation software, pattern recognition, anything that does a lot of data analysis and/or data transformation. Heck, whatever an SSD's BW is good for, high BW memory can also be good for
  • retrospooty - Sunday, February 8, 2015 - link

    Simulation and pattern rec maybe, not that anyone uses them other than rare outliers... But anything an SSD is good for? No, not at all. An SSD improves launch times for anything and everything, from browsers to office apps, to graphics suite apps like Adobe CS, to games. Everything that normal people do. High-bandwidth RAM improves almost none of what people actually do with computers. Even what enthusiasts do.
  • menting - Monday, February 9, 2015 - link

    "SSD improves launch times for anything and everything from browsers to office apps, to graphic suite apps like Adobe CS to games. Everything that normal people do". Exactly my point. Everything that normal people do. All that an SSD does is to provide faster storage (granted, it's non-dynamic storage, unlike DRAM) such that when a CPU can't find the data in the cache nor the DRAM, it can go to it for data. If you have enough DRAM, all these software can reside in DRAM (provided that power does not go out). Also, a smart algorithm will know what software you used before so it will keep some parts in memory so the next time you launch it it will be faster. And the speed that it can relaunch will heavily depend on DRAM BW.
    If you want to talk about rare outliers, people who are serious enough Adobe users or gamers who really get affected in a major way by using a SSD vs a traditional HD are rare outliers. I won't be surprised if the number of those people are on par or lower than people that use simulation software or pattern recognition software.
    BTW, if memory BW doesn't make much of a difference, why do graphic cards go for GDDRx instead of plain DDRx?
  • retrospooty - Monday, February 9, 2015 - link

    "BTW, if memory BW doesn't make much of a difference, why do graphic cards go for GDDRx instead of plain DDRx?"

    You are seriously clouding the issue here. A graphics card, assuming you are utilizing it with a modern 3D game or such, benefits immensely from memory bandwidth on the card... It hardly benefits at all from system RAM bandwidth. As with everything, doubling memory bandwidth while timing/latency more than doubles doesn't improve system performance much at all; some cases have it diminishing performance. This is NOT about video RAM, it's system RAM, and based on Intel's CPU architecture for the past 15+ years, improving memory bandwidth at the cost of latency doesn't help much...

    AGAIN, as I said in my original post - "We saw almost no improvement going from DDR400 cas2 to DDR3-1600 CAS10 now the same to DDR4 3000+ CAS [ridiculous]"
  • menting - Tuesday, February 10, 2015 - link

    if you just want to talk about system RAM, the biggest blame is on software and CPU architecture, since, with the exception of a few, they are not optimized to take advantage of BW, only latency, even when the usage conditions are ripe for doing so.
  • retrospooty - Wednesday, February 11, 2015 - link

    This is an article about system RAM. Why would anyone be talking about video RAM? Agreed, it is a CPU architecture issue, however this is the world we live in and this is the CPU architecture we have... Intel is pretty much the top of the heap.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Don't worry there are endless thousands of bonerheads who cannot wait to be "sporting" DDR4.
    For most, if they are made to think it should be faster, it is, no matter what occurs in reality.
    I'd say 75% of it is how happy hyped their mental attitude is about how awesome their new bonerhead equipment is marketed to be, including any errors and rumors about what it is they actually purchased and installed, which they in many cases are not clear on.
  • xTRICKYxx - Thursday, February 5, 2015 - link

    This would be an interesting topic to return to when DDR4 becomes mainstream with higher speeds.
  • WaitingForNehalem - Thursday, February 5, 2015 - link

    Now this is an excellent article. Thank you!
  • ExarKun333 - Thursday, February 5, 2015 - link

    AMAZING article! Been waiting for this! :)
  • Flunk - Thursday, February 5, 2015 - link

    "There is one other group of individuals where super-high frequency memory on Haswell-E makes sense – the sub-zero overclockers."

    Yeah, I'm sure the 200 people on the planet who care about that are a real big market...

    Nice article overall though. I don't know why, but I was expecting more from DDR4. It looks like there is little reason to upgrade right now. Although I expect we'll all end up being forced into it by Intel.
  • Antronman - Thursday, February 5, 2015 - link

    There's a lot of consumers who want high clocked memory just because they want it.

    And there's more than 200 extreme overclockers on the planet.
  • galta - Thursday, February 5, 2015 - link

    The reason to upgrade today is not DDR4 per se, but 5xxx CPUs, and you might want these CPUs because of the extra cores, extra pci lanes, both, or just because you want it and can pay for it.
    These discussions over RAM get me tired. Rocks on the streets know that:
    a) fast memory makes close to no difference in real world, especially today with overclocking being so much more friendly than it was in the past
    b) whenever a new standard is introduced, it performs poorly when compared to previous standard. It was like this with DDR3 back in 2008 and it's the same today, but today you probably have less than 200 people saying they miss DDR2.
    Let's discuss more interesting and reasonable subjects.
  • Murloc - Thursday, February 5, 2015 - link

    200? You're severely underestimating the number of people who do that.

    Also why do car companies make cars that are going to be driven by just a few sheiks?

    With RAM it's probably even easier, given that you just have to bin chips, and there are people who buy them just because they want the best. That's why they put increasingly cooler heatsinks and packaging on the more pricey sticks. Not because they really need additional cooling in non-extreme use cases.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Because the elite cater to the elite, and the clique' is small and expensive, and leeches off the masses for the advantage and opulent and greedy lifestyle and media hype and self aggrandizement.
    They can fly each other around the world for huge parties and giveaway gatherings called global contests and spend enormous sums and feel very important.
  • imaheadcase - Thursday, February 5, 2015 - link

    Wait a tick, DDR2 is 800+ MHz. That is what it defaults to on both my systems.
  • imaheadcase - Thursday, February 5, 2015 - link

    You put 200-533 MHz. Mine is actually at 936 MHz with the overclock, too.
  • ZeDestructor - Thursday, February 5, 2015 - link

    DDR = Double Data Rate, i.e: two operations are done per clock cycle. Thus the frequency is 400, but the effective frequency is 800. Same applies for DDR1-DDR4.

    GDDR5 is crazier: 4 operations per clock cycle, so 1750MHz works out to 7000MHz effective.
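    The clock-to-marketing-number relationship described above is just a per-clock transfer multiplier (a Python sketch of the arithmetic):

```python
TRANSFERS_PER_CLOCK = {"DDR": 2, "DDR2": 2, "DDR3": 2, "DDR4": 2, "GDDR5": 4}

def effective_mt_s(clock_mhz, mem_type):
    """Effective transfer rate (the 'MHz' on the box) from the real clock."""
    return clock_mhz * TRANSFERS_PER_CLOCK[mem_type]

effective_mt_s(400, "DDR2")    # 800, i.e. "DDR2-800"
effective_mt_s(1750, "GDDR5")  # 7000 effective
```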
  • Murloc - Thursday, February 5, 2015 - link

    so basically what we knew all along: many enthusiasts are just wasting their money. The same goes for size, although few people who build PCs are that stupid when it comes to this; it's mostly gamers who buy pre-built PCs who fall into this trap (it's not like they have much of a choice anyway - everybody is selling computers with lots of RAM and a pricey CPU bottlenecked by a weak GPU because it makes them money).
  • fredv78 - Thursday, February 5, 2015 - link

    seems to me most benchmarks are within the error margin (which is usually up to 3% and ideally should be quoted)
  • galta - Thursday, February 5, 2015 - link

    Yes, yes, it is wrong: whoever spends money on "enthusiast" RAM has more money than brains, except for some very specific situations.
    The golden rule is to buy a nice standard RAM from a reputable brand and use the savings to beef-up your CPU/GPU or whatever.
  • Murloc - Thursday, February 5, 2015 - link

    Yeah, but e.g. with Corsair RAM I always bought the mainstream XMS instead of the Value Select sticks; given that I haven't done any tweaking in my last rig, I might just as well have bought the cheaper ones without the heatsinks.

    Maybe in my next build I will do that if there is a significant price difference.
  • galta - Thursday, February 5, 2015 - link

    You just proved my point: Crucial is pretty reputable and they have no-frills RAM that is generally the cheapest on the market.
    Corsair is always fancy ;)
  • Kidster3001 - Friday, February 6, 2015 - link

    The word "Enthusiast" with respect to computers is synonymous with "Spends more than they need to because they want to." If you're making the Price/Performance/Cost purchase then you are not an Enthusiast. Every year I spend money on computer stuff that I do not need. Why? Because I am an Enthusiast. You may consider this "wasting money", perhaps it is. I don't "need" my 30" monitor or my three SSD's or my fancy gaming keyboard and mouse. I did spend money on them though. It's my hobby and that's what hobbies are for.... spending money you don't need to spend.

    Stick with your cost-conscious, consumer-friendly computer parts. They are good and will do what you need them to do. Just don't ever try to call yourself an Enthusiast. You'll never have the tingly feeling of powering up something that is really cool, expensive and just plain fun. Yeah, it costs more money, but in reality that's half the fun. The tingly feeling goes away in a month or so. That's when you get to go "waste" more money on something else. :-)
  • sadsteve - Friday, February 6, 2015 - link

    Hm, I don't necessarily agree with you on size. With the size of digital photos today, a large amount of RAM gives you a lot more editing cache when Photoshopping. I would also imagine it's useful for video editing (which I don't do). For all my regular computer use, yeah, 16GB of RAM is not too useful.
  • Gunbuster - Thursday, February 5, 2015 - link

    So a 4x4 2133 kit for $200 or a 3333 kit for $800 and 2% more speed in only certain scenarios. Yeah seems totally worth $600 extra.

    You could buy an extra Nvidia or two AMD cards for that and damn sure get more than 2-10% speed boost.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Shhh ! We all have to pretend 5 or 10 dollars or maybe 25 or 50 is very, very ,very very important when it comes to grading the two warring red and green video cards against each other !
  • just4U - Thursday, February 5, 2015 - link

    Is there no way for memory makers to come up with solutions where they improve the latencies rather than the frequencies? The big numbers are all well and good at the one end but the higher you go at the other end offsets the gains.. at least that's the way it appears to me.
  • menting - Thursday, February 5, 2015 - link

    there is. The latency is due to physical constraints, so you can improve it by stacking (the technology is just starting to slowly mature), or by reducing the distance a signal needs to travel, which is done with a smaller process size as well as shortening the signal path (smaller array, smaller digit lines, etc). But shortening the signal path comes at the cost of smaller DRAM density and/or more power, so companies don't really do it, since it's more profitable to make larger density DRAM and/or lower power DRAM. The only low latency DRAM I know of is RLDRAM, which has pretty high power and is fairly expensive.
  • ZeDestructor - Thursday, February 5, 2015 - link

    That, and with increasingly large CPU caches, it's less and less of an issue as well.
  • JlHADJOE - Thursday, February 5, 2015 - link

    Will be interesting to see another article like this when we have CPUs with integrated graphics and DDR4.
  • OrphanageExplosion - Thursday, February 5, 2015 - link

    "For any user interested in performance, memory speed is an important part of the equation when it comes to building your next system."

    Doesn't your article actually disprove your initial statement?

    And surely your gaming benchmarks might make more sense if - once again - you actually tested CPU intensive titles as opposed to the titles you've tested? The GPU will barely touch your expensive DDR4, if at all.

    The only scenario I can see DDR4 making a real difference will be in graphics work with AMD APUs, and even then we'll need to see really high-end, fast kits that should just about offer comparable bandwidth with the slowest GDDR5 to offer a literally game-changing improvement.
  • Sushisamurai - Thursday, February 5, 2015 - link

    Errr... Memory speed did make a difference (small IMO) when it came to DDR3. This article tests if it holds true to DDR4 - however, without an iGPU the other tests don't really show a significant difference when price is factored in. I mean, sure, there's a difference, but not worth the price premium IMO.

    A future AMD comparison would be nice, when AMD decides to support DDR4... Otherwise, it was a nice article.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    That's called the "justify wasting my life writing this article" hook, line and sinker, plus the required tokus-kissing to the kind manufacturers that handed over their top tier for some "free" advertising and getting the word out.

    It's not like the poor bleary-eyed tester can say: "I didn't want to do this because a one percent difference is just not worth it; my name is not K1ngP1n and I'm not getting 77 free personal jet flights this year to go screw around in nations all over the world."
  • vgobbo - Thursday, February 5, 2015 - link

    I really enjoyed this review!

    But... Intel processors are massive cache beasts, which greatly reduces the pressure put on memory (except for games, which I believe were the most interesting part of this review). That said, I wish to see a review on an AMD system, which has a much weaker cache structure and memory buses.

    Is it possible for this to happen, or am I just a dreamer? ;D

    Anyway, this was another outstanding AnandTech review! Loved it! Thank you guys!
  • dazelord - Thursday, February 5, 2015 - link

    Interesting, but isn't Haswell-E/X99 accessing memory in 256-bit mode using 4 DIMMs? I suspect the gains would be much more substantial in 128-bit / 2-DIMM systems.
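    The width difference dazelord is asking about maps directly to peak theoretical bandwidth (illustrative Python; assumes standard 64-bit channels and ignores real-world efficiency):

```python
def peak_dram_gb_s(mt_per_s, channels, channel_bytes=8):
    """Peak theoretical DRAM bandwidth in GB/s (64-bit = 8-byte channels)."""
    return mt_per_s * channel_bytes * channels / 1000

peak_dram_gb_s(2133, 4)  # ~68.3 GB/s quad-channel (X99, 256-bit)
peak_dram_gb_s(2133, 2)  # ~34.1 GB/s dual-channel (128-bit)
```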
  • willis936 - Thursday, February 5, 2015 - link

    Good stuff, but after seeing a fair few memory roundups in my time, I think this mostly confirms what everyone has been thinking: DDR4 is incredibly underwhelming in the performance space. You not only get better bang for buck with DDR3 right now, but comparable, if not better, performance in the high-end kits.
  • galta - Thursday, February 5, 2015 - link

    You've got it wrong. Nobody goes for DDR4 because of the memory, it's because of the new CPU and chipset.
    Ask yourself: do you really need extra cores and/or PCIe lanes? Or do you want them and have the money to pay for it? If the answer is "yes" then you'll go for 5xxx and DDR4 is incidental.
    Otherwise, go 4xxx and DDR3 will also be incidental.
    It makes no sense to talk about memory as if it could be chosen independently from CPU/chipset.
  • rmh26 - Thursday, February 5, 2015 - link

    Ian, could you post more information about the NPB fluid dynamics benchmark? Specifically, which benchmark (CG, EP, FT, ...) and which problem class (S, W, A, ...etc). In my own research I have found the simulation time to scale nearly linearly with memory frequency for large enough problems. I am wondering how much the cache has to do with masking the effects of memory frequency on performance. As the size of the problem gets larger, the cache will no longer be able to mask the slowness of the memory. In general, memory, and moreover interconnects between computers, play a very important role in some HPC applications that rely on solving partial differential equations. In fact, there have been suggestions to move away from the standard HPC Linpack benchmark used to create the Top 500 lists, as this compute-intensive benchmark does not accurately reflect the load placed on supercomputers.

    http://insidehpc.com/2013/07/replacing-linpack-jac...
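    The cache-masking effect is easy to see with a streaming scan over working sets on both sides of the LLC size. A rough pure-Python sketch (sizes and absolute GB/s figures are illustrative and entirely machine-dependent):

```python
import time

def stream_gbps(n_bytes, repeats=5):
    """Time a C-speed scan over a zero-filled buffer and return GB/s."""
    buf = bytes(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        buf.count(0)                 # streams the whole buffer once
        best = min(best, time.perf_counter() - t0)
    return n_bytes / best / 1e9

# Working sets that fit in the LLC are served from SRAM; well past the
# LLC size, throughput falls toward what the DIMMs actually deliver.
for size in (256 * 1024, 8 * 1024 * 1024, 256 * 1024 * 1024):
    print(f"{size >> 10:>8} KiB: {stream_gbps(size):7.1f} GB/s")
```

    On a typical Haswell-E box only the last size really exercises the DIMMs, which is exactly why small-footprint benchmarks barely move with memory speed.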
  • Dasa2 - Thursday, February 5, 2015 - link

    Congrats Anandtech, you screwed up another RAM review, further misleading people.

    The games you chose to review are so badly GPU-bottlenecked it's sad. Do you not know that RAM performance affects CPU performance?

    You could run Dirt 3 with an i3-2100 vs a 5GHz 5960X and get the same score.
    How about putting some different CPUs in amongst your RAM benchmarks, like 4460-4690 and 5820K-5960X, so people can see how faster RAM compares to spending more on the CPU...

    A 4690K with 1600C11 RAM can perform slower in games than a 2500K with 2133C9 RAM.
  • Dasa2 - Thursday, February 5, 2015 - link

    To back up some of what I said, here are a few links.

    i3-2100 matching a 2500K@4GHz in Dirt 3
    http://www.tomshardware.com/reviews/gaming-fx-pent...

    Arma, a CPU-bottlenecked game where a 2600K@4.3GHz with 2133C9 RAM is faster than at 4.9GHz with 1600C11
    http://forums.bistudio.com/showthread.php?166512-A...

    Thief CPU|RAM performance
    http://forums.atomicmpc.com.au/index.php/topic/557...

    BF4: 1600C9 = 60fps, 2400C10 = 70fps
    http://www.team-greatbritain.com/call-of-duty-ghos...

    Xbit's DDR3 review looks a bit different to yours...
    http://www.xbitlabs.com/articles/memory/display/ha...
  • Margalus - Friday, February 6, 2015 - link

    And not one of those is using DDR4...
  • Dasa2 - Friday, February 6, 2015 - link

    Nope, hence why I would like a decent review site like Anandtech to do a proper job of their DDR4 review.
    I'm not expecting as big a difference from higher speeds of quad-channel DDR4 as can be seen with dual-channel DDR3, but even their Haswell DDR3 tests showed jack all due to the same problem with their tests, so how can we know for sure?
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    You're correct, you made your points, so of course someone without many watts on display said something silly. As usual being stupid pays off, and those not dumbed down to below-average levels suffer the frustrating-beyond-belief consequences.
  • mrcaffeinex - Friday, February 6, 2015 - link

    The problem is that we currently do not have a non-enthusiast platform available that supports DDR4. The new X99 platform is also running quad-channel, so the best comparison to a prior platform would have to be using X79 (attempting to keep as close to apples to apples as possible). The point that can be taken from this article as it is right now, is that you can skip buying insanely-priced DDR4-3000+ memory because your X99 rig will probably not perform noticeably different with DDR4-2133.

    As the process matures and more systems adopt DDR4, then you'll be able to do a better comparison across multiple performance levels, but as it is right now, if you're buying into X99, you're buying a high-end CPU. I look forward to the extensive comparative tests that you have mentioned, but I do not see them happening until either the mainstream platform (LGA 115x) is running DDR4 or AMD has any offering that supports DDR4.
  • Dasa2 - Friday, February 6, 2015 - link

    Unfortunately you can't take that from this article, as the gaming tests wouldn't show whether there was any gain from faster RAM even if it did boost CPU performance by 15%.
    These tests were worse than a complete waste of time from a gaming perspective, as they could be very misleading.
    At a guess I would expect to see somewhere between a 3-7% difference going from DDR4-2133 to DDR4-3200 at the same timings, although most of that gain will probably be between 2133 and 2666. Happy to be proven wrong though.
  • Sushisamurai - Friday, February 6, 2015 - link

    Although I agree it would be nice to see the impact of DDR4 timings and speeds on CPU-bound games, I unfortunately don't see the real-world application of it. With DDR4 we're working on Haswell-E, which already has a lot of compute power; if we were to run into any CPU bottlenecks, wouldn't it make more sense to spend more of the budget on the CPU instead of RAM? Unless you had enough money to buy the top CPU and top RAM, in which case the point becomes quite moot, no?
  • Dasa2 - Friday, February 6, 2015 - link

    Depends how big the gain is from faster RAM, doesn't it, and we won't know that until it's tested properly, with RAM speeds compared against CPU speeds too.
    Testing CPU or RAM performance with GPU-bottlenecked games is a waste of time, unless you're AMD trying to sell the FX-8150...

    The only CPU-limited games at this stage on Haswell-E will be the ones with bad multithreading support, so spending a heap more on the CPU for the extra cores of the 5960X won't help.
    What will help is spending extra for a better overclock and maybe faster RAM, but how far do you go?
  • tim851 - Friday, February 6, 2015 - link

    > The games you chose to review are so badly GPU bottlenecked its sad.

    That's why they were running these games at reduced resolutions and IQ settings, Einstein.

    What game should Anandtech benchmark that is NOT GPU LIMITED - Quake 3 Arena?
  • Dasa2 - Friday, February 6, 2015 - link

    They shouldn't reduce detail settings, just drop AA and run at 1080p while using a GTX 980 or two (R9 290/GTX 970/GTX 780 OC minimum).
    But with the likes of Dirt 3, even if they do reduce detail settings it's still GPU-bottlenecked.
    Arma/DayZ are some of the only games that can be CPU-bottlenecked with a single GTX 770.

    Dying Light is very demanding on both cpu and gpu
    http://translate.googleusercontent.com/translate_c...

    There are a lot of games that can be a bit of a blend of CPU/GPU limitation with enough GPU power; although most of them will run fine at 60fps on a 5820K, a fair few won't do 120-144fps.
    http://translate.google.com/translate?depth=6&...
    As they are a blend, their limitation can vary from one part of the game to the next. For example, testing BF4 SP, although it gives more consistent results, will be far more GPU-limited than MP, and some levels will also be more GPU-limited than others.
    This is why I suggest putting different models and clock speeds of CPU in against the RAM speed results, so that people can see where the limit really is and where money is best spent.
  • Tunnah - Thursday, February 5, 2015 - link

    Solid data I can use to stop myself being impulsive and upgrading my rig, thank you!

    Every now and again I get upgrade pangs, trying to justify it with numbers, and this article does a great job of showing what I already know - my system is fine, an upgrade will only show results on paper.

    *Doffs cap*
  • HiTechObsessed - Thursday, February 5, 2015 - link

    Just further proof that faster (more expensive) RAM doesn't do anything for gaming. I laugh when people buy Dominator Platinums for 2x or even 3x the cost of regular Corsair or G Skill for solely gaming rigs.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Despair not; one must understand that inside that thick skull, beneath all the irritating bragging he does because he doesn't know any better, the doofus is happy, because he is so easily parted with his less-than-adequate money supply.

    So the bottom line is that every time the dummy sits down to game, his noggin gets all fired up and happy, because ignorance in that case is bliss.
  • MrSpadge - Thursday, February 5, 2015 - link

    This calibration at boot slowing the process down by 5-8s: can't the system save the proper values from the last boot and start optimization from that point? Wouldn't those values change only slowly, e.g. as the modules age or their number is changed?
  • name99 - Thursday, February 5, 2015 - link

    I understand that the goal here is to test the PAIR of Haswell-E and DDR4.
    However, when it becomes practical, might I suggest that you try for a comparison of
    (easier) AMD and DDR4
    (harder) one of the ARM server chips and DDR4

    The reason I suggest this is that we all know that Intel, especially on Xeon, has the best cache+memory controller subsystem in the business, which, by design, means they're the least helped or hurt by changes to DIMM performance. Vendors whose memory subsystems are not as spectacular will likely see larger swings in performance, and it would be of interest to see how large those swings are (which, in a way, also tells us something about the gap between these vendors' memory subsystems and Intel).
  • MikeMurphy - Thursday, February 5, 2015 - link

    I'm floored that precise timings aren't built into the EEPROM for each system to use. Why is XMP even necessary with DDR4?
  • davidthemaster30 - Thursday, February 5, 2015 - link

    I would have liked to see DDR3 clocked to 2133 15-15-15 (like the JEDEC DDR4 spec) vs DDR4 at the same speeds in single, dual, triple and quad channel, to see scaling from DDR3 to DDR4 and from the number of channels. Also, on the DDR3 vs DDR4 page, the specs for DDR4 are "DDR4-2133 14-14-14 350 2T", but I'm pretty sure that 350 is supposed to be 35... and the speed of the DDR3 for those tests is not stated.
  • Ranger101 - Friday, February 6, 2015 - link

    A very detailed, well written article, but for me somewhat academic, as the conclusion in comparative memory articles always seems to be the same: "There are a few edge cases where upgrading to faster memory makes sense."
  • galta - Friday, February 6, 2015 - link

    Yes, because this is the only logical conclusion.
    Having said that, the community should probably stop discussing RAM, at least until we get to DDR9
  • menting - Friday, February 6, 2015 - link

    that means never discussing RAM again :)
  • Harry Lloyd - Friday, February 6, 2015 - link

    So no difference whatsoever no matter which test? Not surprising, considering the quad channel controller.
    I hope to see a similar test when dual channel Skylake comes out. Also, please find some CPU-bound games. BioShock, Tomb Raider and Sleeping Dogs do not need more than two cores, which makes them completely pointless for this kind of test. Try games like Battlefield 4 MP or Dying Light (extremely CPU-bound and easy to repeat).
  • Arbie - Friday, February 6, 2015 - link

    @nwrigley - I also agree. I have a 2008 build using a Yorkfield quad at 3.6GHz, still running 32-bits and the original 4GB of DDR2. The three things I have really needed to add since then are SSDs, a new graphics card (expected), and adapters for USB3 ports. All of these are "bolt-on", not fundamental changes, and the only one I researched was the gfx board. I know a Haswell build would be 2x more powerful and run much cooler, but neither of those justifies a system replacement. I almost never max out the CPU, or even the RAM.

    This "good enough" syndrome is obviously affecting the industry, and even the websites dealing with it. One well established and very good equipment review site has recently gone, probably because too few people still care about small differences in desktop motherboard, PSU, DRAM, and cooler performance. I suppose this trend will continue.
  • jabber - Friday, February 6, 2015 - link

    I have to admit I stopped looking seriously at RAM reviews once we hit DDR2. I wince when I see a reviewer has wasted a week of their life to do a DDR3 'performance' RAM round up. Well thanks for telling us AGAIN that there is a performance difference of 2% or 0.5FPS between stock $50 RAM and the $300 top of the range. Why do they keep doing RAM group tests?
  • nwarawa - Friday, February 6, 2015 - link

    It wasn't very clear, but it sounded like the DDR3/DDR4 comparison was dual channel vs quad channel. A better apples-to-apples test would run the X99 system in dual channel.
  • halcyon - Friday, February 6, 2015 - link

    TL;DR: Does NOT scale.

    The price difference between 2133 and any of the higher speeds makes no sense, unless you are a super-high res competitive pro-gamer or if you run real-time intensive huge dataloads 24/7.

    For even heavy users, workstations, etc - no point. Just buy the most reliable 2133 or 2400 that is the cheapest.

    The last graph is horrible; the baseline doesn't start from zero. Differences are minimal.

    Sad is the day when the element of interest for pro users is : "Firstly is the design, and finding good looking memory".
  • jnkweaver - Friday, February 6, 2015 - link

    So for example, when given DDR3-2133 C10 (PI of 213) against DDR3-1866 C10 (PI of 187), the first one should be chosen. However with DDR3-2133 C10 (PI of 213) and DDR3-2400 C12 (PI of 200) at the same price, the results would suggest the latter is a better option.

    So 213 beats 187 (1st example) but 213 doesn't beat 200? (2nd example)
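    For reference, the article's performance index is just the data rate divided by the CAS latency; as I read it, when two PIs are close and the price is the same, the rule tie-breaks toward the higher frequency. A quick sketch of the numbers being compared (my reading of the rule, not an official formula):

```python
def perf_index(mt_s, cl):
    # Article's rule of thumb: PI = data rate (MT/s) / CAS latency
    return mt_s / cl

def first_word_ns(mt_s, cl):
    # Real-time latency: CL cycles of the memory clock (half the data rate)
    return cl * 2000.0 / mt_s

for mt_s, cl in [(1866, 10), (2133, 10), (2400, 12)]:
    print(f"DDR3-{mt_s} C{cl}: PI {perf_index(mt_s, cl):5.1f}, "
          f"first word {first_word_ns(mt_s, cl):5.2f} ns")
```

    So 2133 C10 does edge out 2400 C12 on both PI and raw latency; the suggestion to take 2400 C12 only makes sense because the PIs are close and the same money buys more bandwidth at 2400.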
  • Wwhat - Saturday, February 7, 2015 - link

    So from the looks of the tests the speed makes absolutely no difference, but now what I'm wondering is what happens if you have many things running at the same time, several programs simultaneously; maybe that will bring some differences to light? Or is there really no difference at all? That seems a bit odd, and a flaw in the CPU design, since it can't utilize the extra speed. The RAM speed is supposed to be a bottleneck for the CPU, after all.

    Maybe we should hear some comments on the subject from intel and AMD.
  • DarkXale - Saturday, February 7, 2015 - link

    It's not at all a flaw; on the contrary, it's all about intelligently predicting what data we need to have access to soon.
  • gsuburban - Saturday, February 7, 2015 - link

    DDR4 is not much of a performance change at four times the cost, so DDR3 will still be around.
    It's overpriced RAM, to say the least.
  • YoloPascual - Sunday, February 8, 2015 - link

    DDR4 = half-DOA tech
  • wyewye - Sunday, February 8, 2015 - link

    Extremely weak review.

    Ian, is this your first memory review?
    Everyone knows that in real-world apps the difference is small. What's the point of showing a gazillion charts with 1% differences? You had way more random noise from test error; those numbers are meaningless.
    For memory, synthetic tests are the only way.

    Thumbs down; bring back Anand for decent reviews.
  • wyewye - Sunday, February 8, 2015 - link

    @Ian
    ProTip: when the differences are small and you get obviously wrong results, like 2800@CL14 slower than 2133@CL16, run 10 or 20 tests, eliminate spikes, and compute the median.
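    Something like this, for instance (a trivial sketch; the 10% trim fraction is a judgment call, not a standard):

```python
import statistics

def robust_median(samples, trim=0.1):
    """Drop the top/bottom `trim` fraction of runs (spikes), then take the median."""
    s = sorted(samples)
    k = int(len(s) * trim)
    trimmed = s[k:len(s) - k] if k else s
    return statistics.median(trimmed)

# 10 benchmark runs, one of them hit by a background-task spike
runs = [101.2, 99.8, 100.4, 100.1, 135.0, 99.9, 100.3, 100.0, 100.2, 100.5]
print(robust_median(runs))  # ~100.25; the 135.0 spike no longer drags the result
```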
  • wyewye - Sunday, February 8, 2015 - link

    Ian stop being sloppy and do a better job next time!
  • Oxford Guy - Sunday, February 8, 2015 - link

    "Moving from a standard DDR3-2133 C11 kit to DDR4-2133 C15, just by looking at the numbers, feels like a downgrade despite what the rest of the system is."

    Sure... let's just ignore the C10 and C9 DDR3 that's available to make DDR4 look better?
  • eanazag - Monday, February 9, 2015 - link

    Why not post some RAM disk numbers?

    What I saw in the article is that the cheapest, high capacity made the most sense for my dollar.
  • SFP1977 - Tuesday, February 10, 2015 - link

    Am I missing something, or how did they overcome the fact that their LGA2011 test processor has 4 memory channels while the LGA1150 processor has only 2?
  • deanp0219 - Wednesday, February 11, 2015 - link

    Great article, but in fairness, you're comparing the first run of DDR4 modules against very well developed and evolved DDR3 modules. When DDR3 was first released, I'll bet some of the high-end DDR2 modules available at the time matched up with them fairly well. We'll have to see where DDR4 technology goes from here. Again, great read though. Totally not a reflection on the article -- nothing you can do about the state of the tech. Made me feel better about my DDR3-2133 machine!
  • MattMe - Friday, July 10, 2015 - link

    Am I right in thinking that the benefits of DDR4 outside of power consumption could well be in scenarios where integrated graphics are being utilised?

    The additional channels and clock speeds are more likely to have an effect there than with an external GPU, I would assume. But we're still yet to see any DDR4L in the consumer market (as far as I'm aware), which is its most beneficial area.

    Seeing some benchmarks including integrated graphics would be very interesting, especially in smaller, lower powered systems like a NUC or similar.
  • LorneK - Monday, October 5, 2015 - link

    My gripe with Cinebench as a "professional" test is that aside from tracing rays, it in no way resembles the kind of rendering that an actual professional would be doing.

    There's hardly any geometry, hardly any textures, no displacement, no advanced lighting models, etc.

    So yeah, DDR4 makes barely any impact in Cinebench, but I have to wonder how much of that is due to Cinebench requiring almost nothing from RAM in general.

    Someone needs to come along and make a truly useful rendering benchmark. A complex scene with millions of polygons, gigs of textures, global illumination, glossy reflections, the works basically.

    Only then can we actually know what various aspects of a machine's hardware are affecting.

    An amazing SSD would reduce initial scene spool-up time. Faster single-thread performance would also cut render start times. Beefy RAM configs would be better at feeding the CPUs the multiple GBs needed to do the job. And the render tiles would take long enough to complete that a 72-thread Xeon box isn't wasting half its resources simply moving from tile to tile and rendering microscopic regions.
  • Zerung - Tuesday, February 9, 2016 - link

    My Asus mobo notes the following:
    'Due to Intel® chipset limitation, DDR4 2133 MHz and higher memory modules on XMP mode will run at the maximum transfer rate of DDR4 2133 MHz.' Does this mean that running DDR4-3400 CL16 may not give me a latency below 10 ns?
    Thanks
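    The real-time latency is just the CAS count divided by the memory clock (half the data rate), so you can work out both cases yourself; whether XMP is actually capped depends on your particular chipset and BIOS:

```python
def first_word_latency_ns(data_rate_mt_s, cl):
    # CL is counted in memory-clock cycles; the clock is half the data rate
    return cl * 2000.0 / data_rate_mt_s

print(first_word_latency_ns(3400, 16))  # ~9.4 ns at the rated XMP speed
print(first_word_latency_ns(2133, 16))  # ~15.0 ns if capped at 2133 with CL16 kept
```

    In practice a capped kit may fall back to the 2133 JEDEC timings rather than keeping CL16, but either way the sub-10 ns figure is gone.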
