It looks like you guys are re-running all the benchmarks from the original review then, right? I see that the results look to have changed, and fewer CPUs are on the lists (since you haven't rerun them all, I assume).
It suggests that their implementation could probably be made less impactful than it currently is; but the fact that high-precision timers have a performance impact has been known for a long time. In its guise as the multimedia timer in Windows over a decade ago, the official MS docs recommended using lesser timing sources in lieu of it whenever possible, because it would affect your whole system.
What's new to the general tech site reading public is that there are apparently significant differences in the size of the impact between different CPU families.
But is there a 'real' performance impact or does default HPET behavior simply introduce a fudge factor that alters how the tools report the numbers? Is there a way to verify the results externally?
I'm wondering about the same thing. Does the games' frame rate really change (do they get smoother, or vice versa), or does the timer just mess up the numbers reported by benchmarks while the actual frame rate that reaches the display doesn't change?
I'd be more concerned that Intel has found a way to make the timer report false benchmarks that are higher than they actually are. I'd also be curious if the graphics card/cpu combination is potentially at fault.
Nvidia has been shown to cheat in the past on benchmarks by turning off features in certain games that are used for benchmarking to boost the score. Is Intel doing something similar?
I came across a similar issue on VMware, where a virtual machine's clock would drift out of time synchronisation. The cause of this was that VMware uses a software-based clock, and when a host was under heavy CPU load the VM's clock wouldn't get enough CPU resource to keep it updated accurately. This resulted in time running 'slowly' on the virtual machine.
Under normal circumstances this kind of time-drift issue would be handled by the Network Time Protocol daemon slewing the time back to accuracy; the problem is that the maximum slew rate possible is limited to 500 parts per million (PPM). Under peak loads we were observing the VM's clock running slow by anywhere up to a third. This far outweighed the ability of the NTP slew mechanism to bring the time back to accuracy.
If this issue has the same root cause, the software-based timers would start to run slowly when the system is under heavy load, so more work could be completed in a 'second' due to its increased duration. It would be interesting to know if the results with the highest discrepancies were also the ones with the largest CPU loads. Looking at the gaming graphs on page 4, the biggest differences are at 1080p, which suggests this might be the case.
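The mismatch between NTP's slew capability and that kind of drift is easy to sketch with back-of-the-envelope arithmetic (using the 500 PPM limit and the "up to a third" slowdown described above; everything else is hypothetical):

```python
# Can NTP slewing (capped at 500 PPM) keep up with a VM clock
# running slow by up to a third under heavy host load?
MAX_SLEW_PPM = 500
slew_rate = MAX_SLEW_PPM / 1_000_000   # seconds correctable per real second

drift_rate = 1 / 3                      # VM clock loses ~0.33 s per real second

hour = 3600
error = drift_rate * hour               # error accumulated in one hour of load
correctable = slew_rate * hour          # error NTP can slew away in that hour

print(f"accumulated: ~{error:.0f} s, correctable: ~{correctable:.1f} s")
# The slew mechanism is outpaced by two to three orders of magnitude,
# which is why the VMs fell hopelessly behind.
```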
There was also an idle issue with Windows servers where time would drift. The high-load one I've never heard of, though, and our company has thousands upon thousands of VMs running on VMware.
Yeah, like AMD using less demanding tessellation to boost their score in benchmarks, or less demanding AF that completely ignored what an application set. Oh yeah, I forgot: when the same thing happened with AMD it was a bug, but with Nvidia it was cheating. Those double standards are hilarious.
The overhead of HPET causes the Intel CPUs to effectively slow down in combination with the Meltdown+Spectre fixes. Reading the HPET requires a system call, which incurs the harshest penalty of the Meltdown+Spectre fixes. Add the fact that Intel's implementation of HPET is higher fidelity (i.e. a higher clock rate) than the specification requires, combine that higher fidelity with the even larger load on the CPU (due to Meltdown+Spectre), and you get the large performance degradation.
The other timers (TSC + lapic) do not incur as high a penalty, as these do not result in system calls which need to be protected from Meltdown/Spectre exploitation.
The higher clock the HPET runs at on Intel should make absolutely no difference: it's the cost of reading the timer that counts, and the rate it's running at should not be relevant (although a higher frequency may have a higher hardware implementation cost). For the kind of slowdown shown in some games, though, there would have to be LOTS of timer queries. But I suppose it's definitely possible (I suspect nowadays everyone uses the TSC-based queries, which are much faster in hardware and don't require syscalls, and forgets to test without them being available). Meltdown (and probably, to a lesser degree, Spectre) could indeed have a big impact on performance with HPET, if there really are that many timing queries. I'd like to see some data with HPET but without these patches.
I believe, from my own testing, that it's merely a factor of reporting. HPET has always resulted in a smoother, faster, system with less stutters when I enable it.
I also use SetTimerResolutionService to great effect.
It may be more *responsive*, yet able to do less work. In fact, speed and latency can be opposites - if you never pick your head up while doing a task, you'll probably execute it in the fastest possible time, at the expense of anything else that you might have wanted to do during that time. Most interactive users don't appreciate the computer not paying attention to them, so desktop computers trade off speed for reduced latency - although with multi-core systems the impact is small.
In the past, when the TSC was not present, HPET also moved the system more towards being a real-time system, and the cost of that was the overhead of the more frequent timer checks. Nowadays, especially with a nonstop TSC (not impacted by power management), I'm not sure that is the case, but using it might still change latency - for good or ill.
Enabling HPET does not mandate forcing its use over the alternatives, but mandating it with 'bcdedit /set useplatformclock true' does. You can do the equivalent on Linux. And clearly, there is a cost, although that cost varies greatly depending on the platform and what you are doing.
Over the years I've seen numerous cases of people trying to reduce the number of gettimeofday calls to increase performance, and the cost of checking the HPET is probably one of the reasons.
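To get a rough feel for what a single timer query costs under whatever clock source the OS selected, you can time a tight loop of calls. This is an illustrative Python sketch, not a profiling tool; the absolute numbers are dominated by interpreter overhead and vary wildly by platform:

```python
import time

def ns_per_call(clock, n=100_000):
    """Average cost of one timer query, in nanoseconds (very rough)."""
    start = time.perf_counter()
    for _ in range(n):
        clock()
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e9

# On a TSC-backed system these queries are cheap; a forced-HPET system
# pays an MMIO read plus kernel-entry cost on every query, which is
# exactly why people try to reduce the number of gettimeofday calls.
for name, clock in [("time.time", time.time), ("perf_counter", time.perf_counter)]:
    print(f"{name}: {ns_per_call(clock):.0f} ns/call")
```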
"But is there a 'real' performance impact or does default HPET behavior simply introduce a fudge factor that alters how the tools report the numbers? Is there a way to verify the results externally?"
Yes, you can compare the clocks with various applications, including the Timers application, which in our case shows that neither the ACPI nor QPC timers are drifting versus the HPET timer. So the performance difference really is a performance difference, and not a loss of timer accuracy.
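For readers who want to sanity-check drift themselves, the idea is just to measure the same interval against two independent clock sources and compare. A minimal sketch (a meaningful test would run for minutes or hours, not a fraction of a second, and on some platforms the two Python clocks may share an underlying source):

```python
import time

def relative_drift(seconds=0.5):
    """Measure one interval with the wall clock and the monotonic clock.
    If one source were running slow, the two intervals would diverge."""
    w0, m0 = time.time(), time.monotonic()
    time.sleep(seconds)
    w1, m1 = time.time(), time.monotonic()
    return (w1 - w0) - (m1 - m0)   # near zero if neither clock drifts

print(f"drift over interval: {relative_drift() * 1e3:.3f} ms")
```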
Forced HPET, as an example, will disable optimised interrupts for network cards. The AMD and Intel review support staff probably don't have the knowledge to correctly advise you; I suggest you contact Microsoft.
Also, regarding the CPUs tested: what decided which CPUs you compared against? The AMD review guide, or something else? A lot of CPUs were omitted, and noticeably no manually overclocked Intel CPUs appear in your data.
See my first reply: if AMD software is forcing HPET, and especially if they are not informing the user of the consequences of doing this, then that's very irresponsible.
Run a command prompt as an administrator and enter 'bcdedit /enum'. If there is a line like "useplatformclock Yes", then it is forced on; if there is no such line, then it is off. The BIOS is an "off"/"available" toggle, so there is no "on" in the BIOS.
I'm probably misinterpreting this, and feel free to correct me, but initial read of the title would suggest to the common user that "AMD Ryzen 2 has an issue with its internal timer", but upon reading, not only does Ryzen 2's HPET have a minimal performance hit compared to default behavior, but Intel's _own_ HPET is the variant with the LARGEST performance penalty.
This information would imply that the Ryzen 2 benchmarks done so far only stand to gain a bit of performance (under default configurations), but nothing more than ~15% on very select game titles at 1080p, and less than ~5% elsewhere. (This is a good thing, as the good Ryzen 2 benchmarks only stand to gain a bit by measuring performance with HPET off.) It also then implies that Intel should likely be reviewing their HPET implementation to ensure that there's a minimal performance hit for applications which use this feature.
"I'm probably misinterpreting this, and feel free to correct me, but initial read of the title would suggest to the common user that "AMD Ryzen 2 has an issue with its internal timer", but upon reading, not only does Ryzen 2's HPET have a minimal performance hit compared to default behavior, but Intel's _own_ HPET is the variant with the LARGEST performance penalty."
Ultimately this article was a follow-up to our Ryzen 2 review, so it was necessary that it referenced it. (However the use of an Intel CPU picture was also very intentional)
I feel like what this ultimately means is Intel has issues with HPET, and that the results everyone else is getting are the problematic ones, not yours. By forcing a more precise timer, Intel's... I dunno... "advantage" as it were is eliminated.
Seriously, AMD is 1% or less variance regardless of the timer used. Intel is upward of 30% or more. To me, that is a giant red flag.
The question I suppose I have is: are the results even real or legit at that point? Why does the Intel chip suffer tremendously when using an accurate timer, yet pull ahead when not?
From my reading of the article, it seems that Intel takes a larger hit because they use a higher-frequency HPET timer (24 MHz on the 8700K), and thus it is more taxing on the system. The calls are very much under the umbrella of things most negatively affected by Spectre, and as the i7-8700K system had an HPET rate close to 2x that of the R7 2700X, it stands to reason the i7 is going to benefit much more from it being turned off.
tl;dr: the more accurate timer is much more demanding on the system, and under Spectre/Meltdown the system takes an even larger hit on the I/O calls to it.
The HPET can run at a higher frequency without generating more CPU overhead, because it's really just a counter. Making that counter's value grow more quickly doesn't mean the CPU gets more interrupts per second.
Because it makes perfect sense. Intel's losing more clock cycles since it is at vastly higher clock speeds, and it has Meltdown to contend with on top of Spectre. HPET from my cursory reading is 4 system calls compared to just 2 for TSC+lapic. The performance hit of that should then surprise no one.
With AVX-512, Intel has a lot of very high throughput instructions that AMD doesn't. If your software uses them, Intel pulls ahead vs. the best equivalent you could write for Epyc. That's not fishy. You're just taking the more optimal path to solving your problem. When Cascade Lake X and Cannon/Ice Lake arrive, this will all be fixed at the hardware level and the overhead will disappear.
Except that isn't actually true in practice for a wide variety of actual AVX-512-enabled workloads. Running those insanely wide registers drastically increases power draw and thermal output, and as a result clock speeds take a nose-dive. In certain SIMD workloads capable of AVX acceleration, this clock drop-off is so large that EPYC outperforms Skylake-X's AVX-512 support using much, much narrower AVX2 instructions/registers, simply because it can maintain vastly higher clock speeds during the load.
If you're using the widest registers, yes, but there were also a lot of 128 and 256-bit extensions added that were missing from the AVX/2 stack. And Intel will bring the power draw down and the clocks up over time.
The HPET, despite its name, is not more accurate. The TSC timer is accurate to CPU clock-cycle precision, which is usually more than two orders of magnitude better than the HPET.
The difference between accuracy and precision is probably important here. TSC is definitely far more precise, but overclocking can make it much less accurate.
The TSC is *clock cycle* accurate but not *real time* accurate. It speeds up and slows down relative to real time with changes in CPU clock speed; such as what CPUs do on their own when system power state changes.
That is, when a hypothetical 1.6GHz chip downclocks to 800MHz, the TSC's rate relative to real time is cut in half.
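That halving can be shown with trivial arithmetic (hypothetical clocks for illustration; TSCs with the invariant flag, discussed below, don't behave this way):

```python
# Hypothetical illustration: software converts TSC ticks to seconds using
# the nominal rate. If the core (and a non-invariant TSC) downclocks from
# 1.6 GHz to 800 MHz, measured time runs at half speed.
nominal_hz = 1_600_000_000      # rate software assumes
downclocked_hz = 800_000_000    # rate the counter actually ticks at

real_seconds = 2.0
ticks = downclocked_hz * real_seconds   # ticks actually accumulated
apparent_seconds = ticks / nominal_hz   # what naive software computes

print(apparent_seconds)  # 1.0 -> the clock appears to run at half real-time rate
```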
No, that was true maybe 10+ years ago. There are several flags that indicate TSC properties:
- constant (fixed clock rate, but may be halted depending on C-state)
- invariant (runs the same independent of C-state)
All Intel CPUs since at least Nehalem (the first Core i chips) should support these features (not entirely sure about AMD; probably since Bulldozer or thereabouts).
The TSCs are also usually in sync across all CPU cores (on single-socket systems at least). That said, I've seen BIOSes screw this up majorly: the TSC register is allowed to be written, but doing so destroys the synchronization, and it is impossible to (accurately) resync them between cores - unless your CPU supports tsc_adjust, meaning you can write an offset register instead of the TSC directly. That caused the Linux kernel to drop the TSC as a clock source entirely and use HPET instead (at least at that time, the kernel made no attempt to resync the TSCs across cores).
So on all "modern" x86 systems, TSC-based timing data should usually be used. It is far more accurate and has far lower cost than anything else. If you need a timer (to generate an interrupt) rather than just reading the timing data, new CPUs actually support a tsc_deadline mode, where the local APIC will generate an interrupt when the TSC reaches a programmed value.
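On Linux it's easy to check which source the kernel actually chose (and whether it demoted the TSC as described above). This sketch just reads the standard sysfs files and degrades gracefully on systems that don't have them:

```python
from pathlib import Path

CLOCKSOURCE_DIR = Path("/sys/devices/system/clocksource/clocksource0")

def kernel_clocksource():
    """Return (current, available) clock sources, or (None, []) off-Linux."""
    cur = CLOCKSOURCE_DIR / "current_clocksource"
    avail = CLOCKSOURCE_DIR / "available_clocksource"
    if not cur.exists():
        return None, []
    available = avail.read_text().split() if avail.exists() else []
    return cur.read_text().strip(), available

current, available = kernel_clocksource()
# On a healthy modern system this typically reports "tsc";
# seeing "hpet" or "acpi_pm" suggests the TSC was rejected.
print(f"current: {current}, available: {available}")
```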
FWIW, I think the reason Ryzen Master (and some other OC software) requires HPET is that, while the TSC frequency is invariant, it might not be as invariant as you'd like when overclocking (though Ryzen Master had the HPET requirement fixed a while ago). The Ryzen PPR manual (https://support.amd.com/TechDocs/54945_PPR_Family_... says that the TSC invariant clock corresponds to the P0 P-state - this would be the CPU base clock. It naturally follows that if you were to change the base clock for overclocking, the TSC clock would change too, causing all sorts of mayhem, since the OS is likely to rely on the TSC being really invariant (as it's announced as such).

That said, the manual says (for MSRC001_0015) there's a bit that does just what the doctor ordered: "LockTscToCurrentP0: lock the TSC to the current P0 frequency. Read-write. Reset: 0. 0=The TSC will count at the P0 frequency. 1=The TSC frequency is locked to the current P0 frequency at the time this bit is set and remains fixed regardless of future changes to the P0 frequency." So maybe they are now setting this (or maybe they always set it, and requiring HPET had other reasons...).
Because Intel has Spectre-mitigating microcode available, and no such thing is available for AMD yet. Intel is paying that context-switching overhead and AMD isn't (yet).
It's nothing to do with the accuracy of HPET, but the cost of reading HPET. Reading HPET is an I/O operation and a system call, which means you hit the Meltdown mitigation penalty, something that AMD does not suffer from.
No, forcing HPET is a very unusual config; no modern OS has it as the default timer on modern hardware. Not only is it slower, but things like MSI-X also require the LAPIC to work.
"forcing HPET is a very unusual config," Actually, I think it may be very common. For example, based on what I read in the article, my system will have it forced and I never even knew it (will check as soon as I get home). See, I always do my overclocking from the BIOS/UEFI settings. But a while back, just for grins, I decided to try out Ryzen Master. I messed around with it for a while, didn't really like it, and uninstalled it. But the article says that when I installed it and rebooted, it was forced on, and that setting did not go away when I uninstalled Ryzen Master. So, essentially, anyone who has ever used Ryzen Master or a similar tool will have it forced on unless they knew enough to turn it off. I certainly had no clue how this worked, and I'm betting most other people were as clueless as me.
"In short what anandtech did here is "very bad"." Or you could interpret it as very good because all of us who had no clue have now learned something :)
Indeed, all of this could explain why I had some weird results a couple of years ago when testing certain setups with a 980 Ti, I think one of the oc tools may have forced HPET in the manner described. I need to check. To be precise, with hindsight it correlates with the time MSI changed Afterburner to make use of RivaTuner. I could be wrong, but just maybe RivaTuner forces HPET as the main timer...
Did you hear the news of Chris Hook, senior marketing director at AMD, leaving? It's possible that Raja leaving AMD has had much more effect than people realize.
It's good to see competition out there; it helps the industry stay alive. But I'm concerned that, at least for AMD, it will be short-fused.
Chris Hook was not very tech savvy for someone of his position so I am relieved he has left. He started working at ATi around 2000, yet he claimed in his farewell letter that high resolution gaming at the time constituted 320x240. We are all fortunate that the company has shed his weight and his position is now in far more competent hands.
It is interesting that people say so many good things about people while they are working there, but when they leave they make them sound like traitors. I see it differently: for someone to have been there so long, in such a high position - maybe they're seeing the writing on the wall.
The big question is where Chris Hook went - and yes, back in the early days we did not have 4K or even 1080p. I actually talked to ATI developers during the late 1990s and early 2000s. Back then, GPUs were stupid.
Keep in mind this guy was Marketing, not technical
But to me this is a sign that AMD is burning the candle at both ends, if you combine it with Raja leaving.
Have you seen ALL the big players who were with AMD and ATI when they were really good? They are rejoining; that's the writing on the wall. AMD is a good company to work for again. Those who worked there a decade ago were really passionate about their work and the company; then Hector Ruiz came, and it ended with Bulldozer, selling off GloFo, and overpaying for ATI, and he left when nothing was in any way positive apart from a GPU hardware product stack that was good.
This is an exaggeration when, in reality, it is strictly on a case-by-case basis that such judgement calls should wisely be made. For example, when Steve Ballmer left Microsoft, it was a positive thing overall for the entire company to be shed of his weight. Usually, these “writing on the wall” conclusions are prematurely drawn by news venues desiring a hot, trending news item, and they are not necessarily accurate or fact-based. As to here, in truth, I never held Chris Hook or Raja Koduri in high regard before, during or after their announced exoduses from AMD - both came across as underqualified and unknowledgeable for their respective management levels.
HStewart is an Intel fanboy. Despite the turnaround for AMD in the CPU space, according to HStewart the sky is always falling for AMD, and Intel can do no wrong.
"and yes back in early days we did not have 4k or even 1080P." Yeah, but my first PC when I was 12 years old in 2000 had a mediocre monitor with a 800 x 600 resolution (friends had 1024 and a bit later 1280). Him saying that in 2001, when he joined, 320 x 240 was high res just seems ignorant. Or a crappy joke.
Chris Hook was a marketing guy through and through and was behind some of AMD's worst marketing campaigns in the history of the company. Him leaving is total non-issue in my eyes and potentially even a plus assuming they can replace him with someone that can actually run good marketing. That's always been one of AMD's most glaring weak spots.
I applaud your decision to reflect default settings going forward, since the purpose of these reviews is to give your readers a sense of how these chips compare to each other in various forms of real-world usage.
As to the closing question of how these settings should be reflected to readers, I think the ideal case (read: way more work than I'm actually expecting you to do) would be that you extend the Benchmarking Setup page in future reviews to include mention of any non-default settings you use, with details about which setting you chose, why you set it that way, and, optionally, why someone might want to set it differently, as well as how it might impact them. Of course, that's a LOAD of work, and, frankly, a lot of how it might impact other users in unknown workflows would be speculation, so what you end up doing should likely be less than that. But doing it that way would give us that information if we want it, would tell us how our usage might differ from yours, and, for any of us who don't want that information, would make it easy to skip past.
Would be interesting to see a series of comparisons for the Intel CPU:
No Meltdown, No Spectre, HPET default
No Meltdown, No Spectre, HPET forced
Meltdown, No Spectre, HPET default
Meltdown, No Spectre, HPET forced
To compare to the existing Meltdown, Spectre, HPET default/forced results.
Will be interesting to see just what kind of performance impact Meltdown/Spectre fixes really have.
Obviously, going forward, all benchmarks should be done with full Meltdown/Spectre fixes in place. But it would still be interesting to see the full range of their effects on Intel CPUs.
Yes, I'd like to second this suggestion ;) . No one has done any proper analysis of the Meltdown/Spectre performance on Windows since Intel and AMD released the final microcode mitigations. (i.e post April 1st).
I agree as the timing makes this very curious. One would think this would have popped up before this review. I get this gut feeling the HPET being forced is causing a much greater penalty with the Meltdown and Spectre patches applied.
Thanks to Ryan and Ian for such a deep dive into the matter and for finding out what the issue was. Even though this changes the gaming results a bit, it still does not change the fact that the 2700X is a very, very competent 4K gaming CPU.
But to be honest, the 8700K's advantage when totally CPU-limited isn't all that fantastic either. Sure, there are still a handful of titles that put up notable 10-15% advantages, but most are now well in the realm of 0-10%, with many titles in a near dead heat - which, compared to the Ryzen 7 vs Kaby Lake launch situation, is absolutely nuts. Hell, even comparing the 1st-gen chips today vs then, the gaps have all shrunk dramatically with no changes in hardware, and this slow and steady trend shows no signs of petering out (Zen in particular is an arch design extraordinarily ripe for software-level optimizations). Whereas there were a good number of build/use scenarios where Intel was the obviously superior option vs 1st-gen Ryzen, with how much the gap has narrowed, those have now shrunk to a tiny handful of rather bizarre niches.
These being, first and foremost, gamers who use a 1080p 144/240Hz monitor with at least a GTX 1080/Vega 64. For most everyone with more realistic setups, like 1080p 60/75Hz with a mid-range card, or a high-end card paired with 1440p 60/144Hz (or 4K 60Hz), the Intel chip is going to have no gaming performance advantage whatsoever, while being slower to a crap-ton slower than Ryzen 2 in any sort of multi-tasking scenario or decently threaded workload. And unlike Ryzen's notable width advantage, Intel's general single-thread perf edge is most often near impossible to notice without both systems side by side and a stopwatch in hand, while running a notoriously single-thread-heavy load like some serious Photoshop. Both are already so fast on a per-core basis that you pretty much have to deliberately seek out situations where there'll be a noticeable difference, whereas AMD's extra cores/threads and superior SMT become readily apparent as soon as you start opening and running more and more things concurrently (all modern OSes are capable of scaling to as many cores/threads as you can give them).
Just my 2 cents at least. While the i7-8700K was quite compelling for a good number of use-cases vs Ryzen 1, it just.... well isn't vs Ryzen 2.
The thing is, any gamer (read: gamer!) looking to get a 2700X or an 8700K is very likely to be pairing it with at least a GTX 1070, and more than likely either a 1080/144, a 1440/60, or a 1440/144 monitor. You don't generally spend $330-$350 / £300+ on a CPU as a gamer unless you have sufficient pixel-pushing hardware to match it. Those who are still on 1080/60 would be much more inclined to get more 'budget' options, such as a Ryzen 1400-1600 or an 8350K-8400.
There is STILL an advantage at 1440p, which these results do not show. At 4k, yes, the bottleneck becomes almost entirely the GPU, as we're not currently at the stage where that resolution is realistically doable for the majority.
Also, as a gamer, you shouldn't neglect the single-threaded scenario. There are a few games that benefit from extra cores and threads, sure, but if you pick the most played games in the world, you'll see that the only thing they appreciate is clock speed and single- (occasionally dual-) threaded workloads: League of Legends, World of Warcraft, Fortnite, CS:GO, etc.
The games that are played by more people globally than any other, will see a much better time being played on a Coffee Lake CPU compared to a Ryzen.
You do lose the extra productivity - you won't be able to stream at 10 Mbit (Twitch is capped to 6, so it's fine) - but you WILL certainly see improvements when you're playing the game yourself.
Don't get me wrong here; I agree that Ryzen 2 vs Coffee Lake is a lot more balanced and much closer in comparison than anything in the past decade in terms of Intel vs AMD, but to say that gamers will see "no performance advantage whatsoever" going with an Intel chip is a little too farfetched.
Is there any other kind? Either you're at the budget end where everything is GPU limited or at the high-end where not spending a decent amount on a monitor to go with your £500 GPU is a crying shame.
There's a niche where Intel has a clear win, and that's people running 240Hz 1080p rigs. For most folks with the money to spend, 2560x1440 (or an ultra-wide equivalent) @ 144hz is where it's at for the ideal compromise between picture quality, smoothness and cost. There are a lot of monitors hitting those specs right now.
>93% of Steam gamers' main displays are at 1080p or lower.
If the new review suite split which GPUs were run at which resolutions, dropping 1080p from the high-end card section might be reasonable. OTOH, with 240Hz 1080p screens being a thing, there's still an enthusiast market for 1080p combined with a flagship GPU.
IndianaKrom, are you aware that using high(er) frequency monitors retrains your brain's vision system so that you become tuned to that higher refresh rate? New Scientist had an article about this recently; gamers who use high frequency monitors can't use normal monitors anymore, even if previously they would not have found 60Hz bothersome at all. In other words, you're chasing goalposts that will simply keep moving by virtue of using ever higher refresh rates. I mean blimey, 240Hz is higher than the typical "analogue" vision refresh of a bird. :D
IMO these high-frequency monitors are bad for gaming in general, because they're changing product review conclusions, with authors coming to accept huge fps numbers as normal (even though the audience that would care is minimal). Meanwhile, game devs are not going to create significantly more complex worlds if it risks new titles showing more typical frame rates in the 30s to 80s, as authors would then call that slow, perhaps criticise the 3D engine, and moan that gamers with HF monitors will be disappointed - and I doubt GPU vendors would like it either. We're creating a marketing catch-22 with all this, doubly so as VR imposes some similar pressures.
I don't mind FPS fans wanting HF monitors in order to be on the cutting edge of competitiveness, but it shouldn't mean reviews become biased towards that particular market in the way they discuss the data (especially at 1080p), and it's bad if it's having a detrimental effect on new game development (I could be wrong about the latter btw, but I strongly suspect it's true from all I've read and heard).
We need a sanity check on frame rates in GPU reviews: if a game is doing more than 80 or 90 fps at 1080p, then the conclusion's emphasis should be that said GPU is more than enough for most users at that resolution; if it's well over 100 fps, then it's overkill. Just look at the way 8700K 1080p results are described in recent reviews: much is made of differences between various CPUs when the frame rates are already enormous. Competitive FPS gamers with HF monitors might care, but for the vast majority of gamers the differences are meaningless.
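The sanity check proposed above amounts to a simple threshold rule; the cutoffs here are just the ones suggested in the comment, not any kind of standard:

```python
def gpu_verdict_at_1080p(avg_fps):
    """Classify a 1080p result using the thresholds suggested above."""
    if avg_fps > 100:
        return "overkill for most users"
    if avg_fps >= 80:
        return "more than enough for most users"
    return "worth scrutinising in the review"

# e.g. an 8700K pushing 144 fps average in a given title:
print(gpu_verdict_at_1080p(144))
```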
So the real question is if someone first exposed to 100/120/144 Hz immediately squirms in delight, or if they only vomit in disgust months later when they see a 60 Hz screen again. That should be the decider.
1080p is popular in the Steam survey where, incidentally, so is low-end GPU and CPU hardware. Most of those displays are 60hz and an awful lot of them are in laptops. Pointing at the Steam surveys to indicate where high-end CPU reviews should focus their stats is misguided.
I'm still not certain that testing CPUs in a way that artificially amplifies their differences in a non-CPU-reliant workload is really the way to go.
You can just ignore the 1080p benchmarks if you don't think they're meaningful. As DanNeely said, 93% of surveyed Steam users are 1080p or lower, so I'd be shocked if more than a handful of review sites get rid of it.
Steam surveys are meaningless unless you can filter on several factors: OEM vs custom builds, country, etc. That is why I consider anybody who brings up Steam survey users not to know what they are talking about. The US has around 300 million people and we spend the highest amount on PC hardware in the world, yet China has over a billion people and they spend the least; Steam groups everyone together. Second, OEM systems force HPET on. I just checked my laptop running an i7 mobile 6700HK or whatever, and HPET was on in both the BIOS and in Windows. So no, you can't make assumptions: custom builders typically have HPET off, and OEM builds have HPET on. If I were AT, I'd force HPET on - not to screw one company over vs another, but to force them to improve their HPET implementations.
One does have to question the usefulness of a 4k benchmark in a CPU review, other than "yep, its still GPU limited". Whole bunch of graphs showing +/- 3%, content to pad the ads with I guess...
I imagine the intent is to let people who want to be 4K gamers know that it doesn't matter what CPU they get. Or just to find interesting anomalies. You don't know what you don't test.
This makes no sense. I use a GTX 1070 and game on 1080p, let's forget about the reasons why for now, I'm not alone in this. Most gamers, even most of those with slightly above average cards use 1080p monitors. HPET is not an issue, very few people force HPET on. There is a reason only AnandTech got these numbers.
I'm in a similar situation but with a 1080. I'm playing at 1920*1200 @ 60Hz only because I've got other financial priorities keeping me from buying a new monitor.
You are also alone. You cannot rely on Steam survey numbers to claim supremacy; as I mentioned in an earlier comment, a Steam survey entry can be nothing more than a Dell machine with a GTX 1070 running 1080p, or it can be a Threadripper machine running dual 1080 Tis at 1080p. In my situation, I game at 1440p on a 1950X and a 1080 Ti, HPET off. Does that make me a minority? No, it does not. There is no way to measure per-capita spending on PC hardware due to the second-hand market and different demands in different countries. My thought is to force HPET on for all, and may the best company win... just like with anti-aliasing in gaming.
Maybe it should be made clear whether it's on/forced on and so forth
This is anecdotal
But my testing has shown in a handful of games that HPET is detrimental to gaming (7600K at 5.2Ghz)
FPS were the same, but HPET introduced a stutter the whole time
Now I also could've sworn HPET was default off in the UEFI on my Z170
Maybe that's something to look into as well: how is HPET set by default in the UEFI? If it's default on with Z370, it should be made clear it's on; if it's default off for older/newer chipsets, that should be made clear too, methinks
The default would be for it to be enabled, since it is a standard feature of the platform nowadays. However, forcing it to be used instead of the CPU's TSC (when it is also available) is not standard in most modern operating systems where the TSC is known to be reliable, or can be made so.
A stutter is a defect. I favor neither Intel nor AMD in this case; however, IMHO there can be only two outcomes: a) Intel fixes its HPET implementation, or b) Microsoft removes HPET altogether. Only then will we receive the true numbers.
" however it is clear that there is still margin that benefits Intel at the most popular resolutions, such as 1080p."
That's a false and highly misleading statement. It's not about the resolution, it's about an over-dimensioned GPU for a given resolution; the easiest way to put it is high-FPS gaming. 90% will game at 1080p with a 1060, not a 1080. Marketing might have moved rich children from 30-60 FPS to 120 FPS, but people are not made of money, and you know very well how limited high-end GPU volumes are.
For now you should test with and without HPET, at least for a few results, and highlight the HPET impact. One thing I did not notice being addressed, after skimming the article, is the accuracy of the results with HPET disabled. How certain are you that the results are not way off to favor Intel now?
The only misleading and false statement here is that 90% will play at 1080p with a 1060.
Remember, in the future the 1160 will probably be more powerful than the 1080, the 1260 than the 1180, and so on. The bottleneck is still here; it's not going to disappear, and will only get bigger with more powerful cards.
Regardless, how certain are you that the results are not way off to favour AMD now?
Games get more demanding. I'm convinced that at some point 1080p will become obsolete, but we are not there yet. For me 1080p maxed out (sometimes with DSR enabled) looks good enough and ensures that I get the smoothness that is important to me.
Where's the evidence games are becoming more demanding? If that were true, typical frame rate spreads in reviews would not be going through the roof. It's been a very long time since any GPU review article talked about new visual features to enable more complex and immersive worlds. These days, all the talk is about performance and resolution support, not fidelity.
People buy GPUs by targeting the FPS they need inside a budget and sane people do not buy more than they need. And ofc as someone else pointed out, games evolve too, otherwise we would not need better GPUs. Remember that GPUs have been around for decades, we know how things go.
Benchmarks should not be done on a 1060. The purpose of a CPU benchmark is to measure CPU performance. IMO a 1080 Ti at MINIMUM should be used to eliminate GPU bottlenecks. There are some games out there that still bottleneck at 1080p.
You are damn wrong. Sure, you can see a CPU bottleneck... or can you? Now with HPET brought to light, you can alter results dramatically for Intel; but is forcing HPET a default function of the OS?
Basically, you are telling me that benchmarks should have HPET off, a configuration that is supposed to be the default, just because we can see which architecture is better in a non-conventional use?
So what is the value of those precious 1080p benchmarks if they don't represent the configuration the typical end user is going to use the product for in its intended use?
It is coming back to the USE CASE.
If a budget user buys an RX 560, CPU choice at 1080p won't matter. If a mid-range user buys an RX 580/GTX 1060, CPU choice at 1080p won't matter. If a high-end user buys a GTX 1080/Vega 64, CPU choice at 1080p @ 144 Hz will barely matter. If an enthusiast buys a 1080 Ti, CPU choice will matter @ 144 Hz.
And now... what happens with HPET in the picture? How can you accurately render results without biasing yourself anymore?
One thing for sure, Intel needs to fix their stuff.
Thank you for the analysis. Can you somehow verify that the very large variations (RoTR 1-2-3, Civ6) in performance on the i7-8700K with HPET not forced are real? Is it possible that the reported FPS are wrongly calculated when using a non-HPET timer? Can you also get a comment from the developers of those games about this result? 45, 76, and 69% performance differences do not seem normal.
"Thank you for the analysis. Can you somehow verify that very large variations (RoTR 1-2-3, Civ6) of performance on i7-8700K with HPET not forced are real?"
Can and done. Using the Timers application we can compare the outputs of all of the timers, and ignoring the ancient 1 kHz RTC timer, all of the important timers show no drift versus HPET. So there isn't a loss of accuracy affecting the calculation of frame rates.
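For anyone who wants to repeat that kind of cross-check, the idea is simply to sample two independent clocks over the same interval and compare the elapsed times; any real drift shows up as a relative error. A minimal, portable sketch (Python's monotonic and perf_counter clocks are only stand-ins for two independent timer sources, and the helper name is made up):

```python
import time

def relative_drift(duration_s=0.5):
    """Sample two independent clocks over the same interval and return
    the relative disagreement between their elapsed times."""
    a0, b0 = time.monotonic(), time.perf_counter()
    time.sleep(duration_s)
    a1, b1 = time.monotonic(), time.perf_counter()
    return abs((a1 - a0) - (b1 - b0)) / (b1 - b0)

# On a healthy system the clocks should agree to well under 1%.
print(f"relative drift: {relative_drift():.6%}")
```

On Windows the same comparison would be made between QueryPerformanceCounter backed by TSC versus HPET, which is what the Timers application mentioned above automates.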
IMO if it has that dramatic an effect on things, I'd like to see continued testing and coverage of the issue. As pointed out, it's not exactly hard to end up with HPET turned on. I'm pretty sure I've launched Ryzen Master at least once on my system, so it should be forced on. I also use my motherboard manufacturer's fan-control software, which may or may not do the same thing.
Testing every game on every system with HPET off and then on may not be practical but I'd still like to see tests for CiV6 and/or RoTR as those seem to be the biggest outliers until the impact becomes minimal or statistically meaningless.
It's not a bug. You literally have no concept of what you are talking about, yet you are commenting on an article that explains it in greater detail than ever put to page. HPET has to be used in extreme overclocking scenarios, as Windows 8/10 create variances in those situations. Ryan misinterpreted that it needed to be on always, and thus this situation was born.
HPET isn't a bug; it's a setting in the BIOS that forces clock synchronization on the faulty Windows 10/8 system, which can give incorrect data (i.e. benchmark times etc.)
Good to see things cleared up on this. My question is this: I understand that on AMD systems HPET is turned to forced-on by Ryzen Master, which needs it; am I right on that? That explains why it was turned on for the AMD systems, but if it was not the default for the Intel systems as well, how or what changed it to forced-on on the Intel systems? Was it changed to enabled in the Intel BIOS, which then forced the OS to use the forced-on option? My other concern is that if it eats away so much performance, why haven't Intel and AMD come up with better ways to deal with this issue? Or is it kind of a newer problem because of Spectre/Meltdown patches and microcode updates on the Intel platform, with HPET in forced mode killing performance because of that?
They forced HPET on in benchmarks via script (as I understand from the article), and for AMD it is irrelevant whether it is on or off (also explained in the article).
So basically the moral of the story here is: leave things as the hardware vendor intended, or at default settings, and everything should be fine. This does raise about a million more questions on how reviewers should, or even need to, change the way they set up the gear for reviewing. It also confirms that this, and probably a lot of other variables in the hardware that can skew results one way or another, answers at least part of the question as to why the same hardware performs so differently from review to review. Just for the record, I am not saying AnandTech in any way tried to skew the numbers; I am very sure that is not the case here.
Well, if you have been forcing HPET on for all those years, it pretty much means that all the tests on this site are not valid and not representative at all.
HPET is widely known as the cause of several performance issues (stuttering, FPS drops on CPUs with more cores), but I never personally believed it because there were no benchmarks to support it, only some geeks on forums posting their latency screenshots with HPET on/off and anecdotal evidence from people who allegedly gained/lost FPS by turning it on/off.
The point is: the benchmarks here are not run on the same sticks of RAM (frequencies, timings), but the highest officially supported frequency is used to simulate what the platform is capable of.
So why turn on/enforce something by default if it could potentially cause a performance regression and make your avg, min, max, and 99th percentile numbers absolutely skewed?
I'm pretty confused. In these cases, it's the denominator (time) that's changing that affects the resultant performance assessment right? The raw performance (numerator) is unchanged.
e.g. FPS = frames / time. Frames remain the same, but time is measured differently.
It stands to reason that there is no actual performance difference, just an inconsistency in how time is measured. For that matter, we're not even sure whether either system is accurately timing itself.
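The concern above is easy to model: the frame count is fixed by the run, so any error in the measured interval scales the reported FPS inversely. A toy illustration (the function and all numbers are hypothetical):

```python
def reported_fps(frames, true_seconds, timer_error_ppm=0):
    """FPS as a benchmark would report it if its timer runs fast or
    slow by timer_error_ppm parts per million of real time."""
    measured_seconds = true_seconds * (1 + timer_error_ppm / 1_000_000)
    return frames / measured_seconds

# 6000 frames over a true 60 s is 100 FPS in reality.
print(reported_fps(6000, 60))                           # accurate timer: 100.0
print(reported_fps(6000, 60, timer_error_ppm=-50_000))  # timer 5% slow: ~105.3
```

A timer running slow makes every run look faster than it really was, which is exactly why an external cross-check of the timer matters.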
IMO we shouldn't be trusting the benchmarked system's timer at all. Run an ntp server elsewhere on the network and get time from that before and after each benchmark. Likewise all gaming results really should go through an external tool like FCAT.
AFAIK it's only in the PC industry that benchmarks trust the system being measured to do book/time keeping for itself, which is kinda nuts considering the system clock will be going from base to boost and each core will be running at different frequencies, and the whole system is subject to thermal swings.
Agreed, using the system to basically audit itself is kind of a flaw in the design of testing.
However, applying a third-party time index isn't so easy. I guess you could film each game's performance on the monitor with a high-speed camera, but parsing that data would be nightmarish at best.
The easiest way would be to use an external computer's (such as a web time server's) timestamp before the test and when it finishes, with the margin of error being the average ping time to the server, I guess. But that changes the way testing and benchmarks are done.
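As a rough sketch of that bracketing idea (time.time() stands in for the external clock here, and the helper is purely illustrative, not an existing tool):

```python
import time

def externally_timed(benchmark, external_clock, round_trip_s=0.0):
    """Bracket a benchmark run with timestamps from a clock outside the
    system under test (e.g. an NTP server on the LAN). The measurement
    uncertainty is roughly the round trip to that clock."""
    t0 = external_clock()
    result = benchmark()
    t1 = external_clock()
    return result, t1 - t0, round_trip_s

# Stand-in demo: the local wall clock plays the external time source.
value, elapsed, uncertainty = externally_timed(lambda: sum(range(10**6)),
                                               time.time)
print(f"result={value}, elapsed={elapsed:.4f}s (+/- {uncertainty:.4f}s)")
```

In a real setup, external_clock would query the time server, and the measured ping would bound the error, as the comment suggests.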
The solution already exists. DigitalFoundry does it. They capture the output video with an external device and then run it through a special software that is able to determine frame times and produce a frame rate graph. This is how they manage to determine exact frame rates for consoles even.
FCAT testing. Super expensive to do right (requires beefy enough hardware on both the dedicated capture rig & it's actual video capture card itself such that the video capture of whatever's being tested doesn't drop a single frame [as the capture rig isn't what's being tested/analyized, it needs to be as close to perfect frame-pacing/capture as possible]), but suuuuper freaking awesome haha.
I'm pretty sure Digital Foundries FCAT analysis software was even designed in-house. Lol Richard's steezy FCAT testing has become like his calling card by this point.
If HPET results in a system call, it is both. The Meltdown and Spectre mitigations make ordinary system calls *much* more expensive, and AMD's platform isn't mitigating those yet.
More stringent testing of HPET needs to be done. It could be the case that everything is performing the same in all tests but the results are reporting the wrong numbers (which I would assume would be the case for the HPET not forced results). But forcing the HPET when not expected could be causing other timer related issues in the programming that could result in loss of performance.
Yeah, basically. It's the time portion that is problematic. It's been the case since reviewers were reviewers and using FPS.
A more accurate measure would be frames rendered for the same, identical test, for each system. Most games do not provide such information or tests, though.
No, I believe he was saying that if you aren't messing around with extreme OC and altering base clocks etc., the time portion is always accurate. The raw performance does change from the CPU overhead of HPET in Intel systems, by a lot in some cases.
Not just extreme OC; anything that changes the clock speed, for example the CPU down clocking at idle, will change the rate of TSC relative to "real time". HPET exists to be the arbiter of "real time" unmoored from CPU frequency.
When you examine CPU performance, 1440p is usually bottlenecked by the GPU, so the CPUs are all waiting around for the GPU and don't really get to show who's faster.
When you run at 1080p, the GPU has no problem handling it, so CPUs are no longer waiting around for the GPU. More responsive CPUs keep up with the GPU to provide super high framerates; slower CPUs will drag the system down, lowering framerates especially in CPU-intensive situations like tracking lots of players or mobs at once.
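The bottleneck argument above reduces to taking a minimum: the delivered frame rate is roughly whatever the slower of the two components can sustain. A toy model (all numbers hypothetical):

```python
def delivered_fps(cpu_fps, gpu_fps):
    """The frame rate you see is capped by whichever component is slower."""
    return min(cpu_fps, gpu_fps)

# At 4K the GPU is the limit, so two very different CPUs look identical:
print(delivered_fps(cpu_fps=200, gpu_fps=60))   # 60
print(delivered_fps(cpu_fps=150, gpu_fps=60))   # 60

# At 1080p the GPU has headroom, so the CPU difference finally shows:
print(delivered_fps(cpu_fps=200, gpu_fps=300))  # 200
print(delivered_fps(cpu_fps=150, gpu_fps=300))  # 150
```

This is why CPU reviews lean on low resolutions: only there does the min() stop being dominated by the GPU term.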
You didn't ask the most important people: Microsoft, the developers of Windows. They state the default timers shouldn't be fiddled with unless you are debugging timer problems or have a specific need to force a certain timer. TSC is the best-performing timer for modern processors.
I remember a couple of years back when I followed a silly guide on the net to force HPET in Windows and later discovered it was to blame for weird stutters I had in games.
If AMD's own software is forcing HPET in the OS, and especially if they're not telling the end user what they're doing, then that's very irresponsible.
A really interesting article. I disabled HPET at motherboard level a few months ago in the pursuit of lower usb latency, and noticed it also made game framerates slightly smoother. (i5 3470, Z77 chipset, win10 x64)
I would love to see your results and hear your thoughts with regard to the compilation benchmark. As a developer fighting long build times, this is extremely relevant to my current work.
I can't understand why you portray HPET as a magical highest-precision timer. TSC is faster and more accurate when it has a proper implementation (modern CPUs). It would be really useful to test how overclocking modern CPUs affects TSC, and maybe report bugs to the CPU manufacturers if it still does.
TSC isn't only the highest-resolution timer – it's also the cheapest one in terms of latency. It has only 2 major problems: 1) On some really old CPUs it's tied to the actual CPU clock and changes with frequency changes. 2) It's tied to the system base clock and changes with it.
But since base clock overclocking is dead you can pretty much consider TSC as a stable timer now.
There's also the inconvenience that the TSC is a per-core timer, and it's hard to get the TSCs exactly synchronized between cores, so software that needs really high resolution timing also needs to worry about thread pinning.
The power management effects were fixed way back with Nehalem. With even desktop CPUs doing clock speed changes all the time (eg. Turbo Boost), TSC would be useless if it didn't account for any of that. Nowadays, the TSC is only vulnerable to distortion from unusual sources of clock speed changes, like BCLK overclocking or drift in the clock generator.
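Since much of this thread hinges on how expensive a single timer read is, it is worth noting the per-call cost can be estimated by timing a tight loop of clock reads. A rough sketch (perf_counter_ns stands in for whichever time source the OS has chosen; the measured number will differ wildly between a TSC-backed and an HPET-backed source):

```python
import time

def timer_call_cost_ns(clock=time.perf_counter_ns, iterations=1_000_000):
    """Estimate the average cost of one call to the given clock, in
    nanoseconds, by timing a tight loop of calls to it."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        clock()
    end = time.perf_counter_ns()
    return (end - start) / iterations

print(f"~{timer_call_cost_ns():.0f} ns per timer call")
```

On Windows, the equivalent experiment against QueryPerformanceCounter is what tools like the TimerBench benchmark mentioned elsewhere in this thread automate.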
Plenty of words and nothing about another solution to the timing problems: drop always-in-beta Win10 and test on stable Win7.
You write that you care about 'gamers' and 'default configuration' yet ignore that Win7's share is almost 2x Win10's (according to Steam). In the enterprise there is even less love for Win10.
Plenty of mbd vendors support Win7 with Ryzen, whatever the official support is supposed to be. Most mbd vendors are not so dumb as to lock out the largest share of the market.
An interesting article that talks more about the issue. They even look to have a benchmark to show the impact. The video is also very interesting. The more I research this problem, the more I see it's been known for a very long time now.
Very thorough article. I'd like to point out a few things, though, that may add some information to this.
AMD and especially Intel have swept this problem under the rug since the launch of Skylake X. I noticed this problem while benching for a review and initially thought that my OS installation was the cause. After some testing I finally found the same root of evil as Ian did. At that time I made a video and called it the "Intel X299 HPET" bug (can't post a link, it was already mentioned in the comments here).
I tried to talk to PR and engineers at Intel for quite a while and they heard about my bug report but refused to comment. Time went by and Threadripper and Coffee Lake were born, both inheriting the same slow HPET QPC timer calls. I informed Intel repeatedly, still no comment.
During that time I wrote the following benchmark that sheds some light on the whole QPC and timer business on Windows. It shows your Windows timer configuration, gives recommendations for precision and performance, provides a way to bench your QPC timer in a synthetic and a game test and gives easy access to change TSC to HPET and vice versa.
As I am not able to post a link here, please search for "TimerBench", you will be able to download it.
I am also the author of GPUPI, one of those benchmarks for overclockers mentioned in the article that enforced HPET timers for secure timing a while back. Since discovering the HPET bug I have pulled back on this restriction. Since Skylake, HPET is no longer necessary to avoid BCLK skewing; iTSC is just fine. AMD is still affected though, possibly Ryzen 2 as well (Threadripper and Ryzen 1 were).
Might be because English has its roots in Germanic languages. :D Old English sounds a lot like common words in Dutch, and there's a region in Germany where the way German is spoken can sound to other Germans to be rather like English (according to a German guy I know). It's all those pesky Saxons, Angles, etc. :D
Thank you _mat! Hopefully your comment gets attention here at Anandtech, and in turn, this article and your work get some attention from Intel. On the AMD side, it sounds like enabling HPET has only a small penalty in most cases, but those differences on the Intel side are very troubling. At the very least we should be forewarned!
First of all, I wanted to thank you for an extreme effort you put in your reviews and analysis.
My thinking on the subject is that if you disable HPET in the OS, this may make your numbers and review conclusions irrelevant to real-world scenarios. As you have said, many programs (like video streaming, monitoring/overclocking, and potentially motherboard software, not to mention Ryzen Master) require HPET to be enabled in the OS and will force it during the installation process, most likely without informing you. That means that if you've installed all the software you're going to use on a fresh OS (and/or fresh PC), it is very possible that some of that software will have forced HPET and you won't know about it.
To my mind, most people who read CPU reviews are enthusiasts and/or those who want to make a CPU purchase decision by themselves. The majority of people will just buy a PC based on others' opinions or a consultant's advice. So those for whom a 10% difference in performance matters, and/or those who bought an expensive GPU like a 1080/1080 Ti, will probably use monitoring software like HWiNFO or Afterburner. That means HPET will be forced on their systems, and that they will have real-world numbers close to what you got in the original Ryzen 2000 review.
Another thing is that by disabling HPET in OS, while doing tests for a new review, you will hide the problem with it on Intel systems. People will not consider this as a potential performance hit or disadvantage of Intel platform in general.
Moreover, I suspect that in the future more programs, and probably next-gen games, will require HPET (in order to better synchronize even more threads). Since most people keep a CPU for more than one year, they will potentially have a worse experience with Intel CPUs in the future, compared to AMD CPUs.
So it looks more logical to me to test CPUs with HPET forced (for all software), but have additional tests with HPET disabled for just games in order to have games tested with HPET both on and off. That will emphasize the problem. For me this is the same reason why it is important to test hardware with all Smeltdown patches and BIOS updates installed.
If it doesn't use HPET, then how can it be precise? My conclusion doesn't follow any assumption. AT said that any software can force HPET. Maybe Afterburner doesn't do that right now, but there is a good chance that other software does, or that Afterburner will in the future.
"However, it sadly appears that reality diverges from theory – sometimes extensively so."
Please don't do this. There is no theory that states that HPET won't affect benchmarks, but rather an expectation that they will not. That is a different thing. I understand it's a common colloquialism to use "theory" in this way, but I also expect Anandtech to exceed common standards.
How does HPET affect DX11 or DX12 bench results? I find it somewhat odd that an obscure BIOS or OS switch can be so controversial.
I would suggest some research regarding HPET settings vis-à-vis DX12, DX11, and even Vulkan. Since Windows 7 and 8 are still in widespread use, how hardware responds to the two major APIs is relevant.
However, there is also one system stress benchmark that, while in widespread use in European online media, is almost completely ignored in the US, with the occasional convenient exception of Tom's Hardware. That benchmark is the ChessBase Fritzmark.
Best job, AnandTech! This is why I read your articles and regard you highest on the net: what you are doing is science. Many, many thanks for that! Important question: will you be using the new AGESA 1.0.0.2a BIOS for Ryzen 2xxx? It shows a positive performance impact: https://www.phoronix.com/scan.php?page=article&...
Kudos for identifying and correcting the fault in your test methodology, and especially for publishing your findings. I was about to write AT off with those sketchy gaming results but I can see there is no need, you have redeemed yourselves! Anand would be proud ;)
A fast one? Nobody turned on HPET during benchmarking for at least a decade. They started doing it because AMD said so, because there were some discrepancies going on when posting results on HWbench
Phoronix reports that there is an AGESA 1.0.0.2a update for the ASUS X470 motherboard he has, which brings another 6% performance increase with seemingly everything on Linux and the Ryzen 7 2700X.
"however the most gains were limited to specific titles at the smaller resolutions, which would be important for any user relying on fast frame rates at lower resolutions."
Uhh, isn't that negated by more stuttering without HPET? Or does having it "available" provide the same real-world smooth gameplay as having it forced on, but somehow also boost benchmarks?
You shouldn't be seeing stuttering in normal scenarios. If anything, it's forcing the use of HPET that could lead to stuttering, since it's a relatively expensive system call to make.
So... now this has me thinking... which results are accurate? Are the new findings used by Intel to show an artificial boost in benchmarks? I just can't grasp this much of a performance difference just from HPET being forced on... it seems that just the reporting is skewed... which sounds very pro-Intel!
As amazing as it looks, the new results are accurate. Forcing the use of HPET really does have a sizable performance impact in some of these games. Particularly, I suspect, any game that likes to call on OS for timers a lot.
While I know that running your tests without HPET forced is most representative for most people, would it be unreasonable to ask that the results with HPET forced be presented moving forward?
For instance, I use HPET timer for collecting performance data for software that I write. If enabling HPET can cause a 10-30% drop in performance, it makes a huge difference to me. That's enough of a difference to throw off the measurements of parallel fine-grained operations by a very substantial margin. In my case, that would result in improperly tuned code.
Based on your results with Ryzen 2, there is a much more significant difference between the 2700X and the 8700K than most reviews suggest for my application. That's an important insight from my perspective. If the pattern holds for the HEDT chips or, *shudder*, Epyc and Xeon, there is a lot to lose by not considering the effects of HPET. In those spaces, it could mean missing out on several thousand dollars worth of performance per CPU by choosing the wrong architecture.
This issue shouldn't matter at all in the server space, because one of the only reasons to force the HPET to be used as the primary timer is to get accurate timing when overclocking (or get results at stock speed that can be fairly compared against accurate overclocked results). Servers don't get overclocked, so they can rely on the TSC for most of their timing needs and not have to incur the HPET overhead on every time check. (The HPET will still get used for some things, but it doesn't have to be the only time source when the TSC is trustworthy.)
The problem is not just for overclocked CPUs. Also, what if you don't know whether HPET is being forced? Who knows to check for that? What software can force it on?
Are there really no adverse effects to defaulting to not forcing HPET? IMHO, measurement is not only about accuracy but also about timeliness; in measurement, benchmarking, or maybe in control loops it would matter a lot. On the other hand, in gaming, I don't know if this is the correct understanding, but it could cause untimely frames, or parts of them, ghosting, artefacts, etc.
And when MS changes HPET default to forced if detected ... then you are screwed again and have to retest ...
You are at a dead-end actually. You are switching to HPET off (effectively) because it highly favors Intel in a few benchmarks yet AMD is mostly unaffected. Will you change that if the tables turn in the future again ?
Come on... it is more about consistency. Your HPET-forced mode definitely highlighted an issue with Intel chips, yet instead of pressing Intel on the issue you are changing your settings...
He's entirely right. HPET is an issue on Intel's shoulders as of now. How can we be sure that with HPET off, Intel benchmarks are accurate?
Also, you cannot turn off something in the BIOS that is supposed to be on, as mentioned by Intel, just because you want to give the crown to one manufacturer or the other. By the way, we are talking about 1080p benchmarks with a GTX 1080; 60 Hz is irrelevant since an RX 580 can render it, which leaves only 1080p @ 144 Hz.
Also, what about new games since these results seem to be linked to old games?
You don't get 40% more performance by switching something on and off in the BIOS. If you do, then something needs to be fixed.
Well, I once disabled my boot drive in the BIOS and experienced a 100% slowdown. I'm tempted to agree with you and feel that something needs to be fixed so that my system works without a boot drive, but other people don't seem to agree. Opinions...
Then there was the time when I disabled my NVIDIA GPU in the BIOS of my laptop. Massive performance drop in games... Not good. Sad. Needs fixing.
What would be more accurate than your statement is something like:
"We found that our previous data contained some pretty significant inaccuracies, and to be thorough, we're re-testing and improving our testing methodologies as a whole, and explaining as much at length for the sake of transparency. Thanks for being patient with us."
What I don't understand is how the original gaming data ended up being published at all.
The faulty results that were published were entirely at odds with the data supplied by AMD itself (which we've all seen - even prior to the reviews dropping, if you've been following the leaks). Surely if Ryzen 2000 was so much faster than Coffee Lake, they would have been shouting this from the rooftops - gaming is, after all, one of the few weaknesses Ryzen has. AMD's no-doubt massaged results were a ton more accurate than Anandtech's - madness.
Not only that but the Anand results showed a massive increase over Ryzen 1000 - which simply isn't feasible for what is effectively a mild refresh. Meanwhile, the results also showed Ryzen 5 handily beating an 8700K... surely you must have realised that something wasn't right at that point? Utterly baffling and calls into question your approach generally.
This is a major hit to credibility. I mean, if you're going to publish a CPU 'deep dive', surely you need to actually analyse the data and be ready to question your results rather than just hitting the publish button?
It's extremely disappointing. The original benchmarks should not have been published. But since that happened, they should have been removed or at the very least, there should have been a far more obvious (and stronger) disclaimer.
Even their mea culpa isn't very strong. The article is well written, but should have started with: "We made a mistake". I dislike the misleading statement that other publications did not all install the latest patches. Some publications did, others gave a good reason why they didn't.
Mistakes happen, but after days of showing incorrect benchmarks people started speculating and even now there are still people spreading misinformation based on the AnandTech article.
This is an editorial problem. I feel very ambivalent about this. AnandTech supplied some really interesting information, but if they can't redact wrong information in a timely fashion, then AnandTech is not trustworthy.
I have reached my personal conclusion, but it is with regret. A few extra lines in the original version of the review would have made all the difference.
"This is an editorial problem. I feel very ambivalent about this. AnandTech supplied some really interesting information, but if they can't redact wrong information in a timely fashion, then AnandTech is not trustworthy."
Ultimately that sits with me. Ian was able to repeat the earlier numbers again and again and again. Which indicated, at least initially, that it wasn't a mere fluke with our testing. The numbers we were getting were correct for the scenario we were testing. It just wasn't the scenario we thought we were testing.
It wasn't until he found the HPET connection a couple of days later that we even began to understand what was going on. And a couple of days later still until we had enough data to confirm our hypothesis.
As far as the original review goes, while it's been updated a few times since then, once we decided to audit our results, we did post a note on several pages about it. Which I hope was plainly visible. But if not, then that is very good feedback to have.
It's too bad you're so upset by this. Their first results, while not representative of peak performance, are indeed valid for the way they were achieved. They are not "wrong," so much as parallel. I hope they keep the results. As this review is the first one to have this issue, I'm glad to see they discovered the cause of the performance scores in such a short time. Keep up the good work, Ian!
All that's really missing is the admission that tinkering with the HPET settings crippled Intel performance, and gave AMD the lead which they would not have without having done so...; correct? :) (Nice attempted deflection with the 'someone at Intel told us it would not matter' spiel....!) :)
As a current Intel user, I'm more worried about what software is using this timer and how often I have been exposed to this sort of non-stellar performance.
My next upgrade will take this into consideration. Best of luck to you all.
Seems like the HPET bug finally gets the right momentum. Great!
I finally wrote an English article about the HPET bug. There are a lot of misconceptions going on right now about what the bug is and what it's not. It also explains what the HPET timer problem was once, why it matters again today and which platforms are affected. Sadly this shows the way Intel handles bug reports like this as well.
Be sure to try TimerBench as well, my Windows timer benchmark! It has already been posted here so I won't bother you again (and the download is in the article).
You raised an important, IMHO, point: Do game engines use HPET information in their logs, their AI calculations, their gfx calculations, and whatever else they do inside?
Why this is important, in my opinion, is because what will the users' experience be out of the box? They build their PC, they install Windows, they install the game, and then... do they disable or enable HPET before they play? No, they run the darn game!
We trust you to give us review results that would typically represent what we will get. Non-technical users also trust you to give them review results that would typically represent what they will get, no fiddling around, because they don't know how and aren't interested in doing so. Please take this comment into account when deciding if you're going to be flipping HPET switches with every game on both CPU brands.
And hey, I didn't see it, but did you do any comparisons on whether the GPU maker makes a difference to the HPET impact per CPU maker?
I think you will see a lot of websites testing these combinations and re-validating their results. How do we trust any benchmarks now? Going to be some fun reading in the coming weeks.
"Please take this comment into account when deciding if you're going to be flipping HEPT switches with every game on both CPU brands."
Thankfully, we have no need to flip any switches for HPET. The new testing protocol is that we're sticking with the default OS settings. Which means HPET is available to the OS, but the system isn't forced to use it over all other timers.
"And hey, I didn't see it, but did you do any comparisons on if GPU maker makes a difference to the HEPT impact on CPU maker?" We've done a couple of tests internally. Right now it doesn't look like it makes a difference. Not that we'd expect it to. The impact of HPET is to the CPU.
Even before the patches, using the HPET timer causes severe system overhead. This is a known issue that is exacerbated slightly by the patches, but there isn't a massive increase in overhead. AnandTech should post HPET overhead before and after the patches. You will find the impact is much the same.
Great illustration of the phrase that "it's better not to know than know something which isn't so".
A standard 1kHz RTC is good enough for all real performance measurement where the measured tasks run for at least a second or two (otherwise such performance just does not matter in the PC context). Multiple measurements, plus elimination of the false precision from averaging the results, would eliminate all errors significant for the task.
When you have to change the default system configuration to run the tests, the tests reflect non-default configurations nobody is running at home or at work, and as such are simply irrelevant.
I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down? How? A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock. Is the only way to guarantee that you are using it to force it on for the whole system? What do these differences look like in other OSes? There are way too many questions unanswered here.
Is it not more likely that using non-HPET timers allows the platform to essentially create its own definition for what constitutes "1 second"? Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?
These systems need to be tested with a common clock. Whether that is some specialized pcie device, or a network clock, or a new motherboard standard that offers special pins to an external clock source, or whatever, is to be determined elsewhere. All boards need to be using the same clock for testing.
I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down?
It's not that it's available that's the problem. The issue is that the OS is forced to use it for all timer calls.
"How?"
Relative to the other timers, such as QPC, HPET is a very, very expensive timer to check. It requires going to the OS kernel and the kernel in turn going to the chipset, which is quite slow and time-consuming compared to any other timer check.
"A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock."
Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently.
"Is the only way to guarantee that you are using it to force it on for the whole system?"
As a user, generally speaking: yes. Otherwise a program will use the timer the developer has programmed it to use.
"Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?"
No. Modern Invariant timers are very good about keeping accurate time, and are very cheap to access.
"Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently."
Would it be too difficult to set up a profiler session and count how many times HPET is called, and eventually even from where?
To produce such an impact it must be a load of calls and I still cannot imagine why so many.
Next, why Intel suffers so much by forced HPET compared to new AMD?
It's noteworthy that HPET use at default Windows settings is a black box, aka Windows decides whether to use it or not unless software makes an effort to get more control over it. Windows' decision to use one timer or the other depends on the hardware platform, the Windows revision (even small updates) and even the mixture of software you are currently running.
This also means that the HPET "bug" reported here and on Overclocker.at can hit everyone without them even knowing when and why. Some people prefer to disable the HPET completely, albeit I am not a fan of this "solution". Instead I would very much expect Intel to get a hold of the situation and fix the issues their hardware is experiencing when HPET is used.
Again, the default Windows behavior is not to avoid HPET entirely, but to make only seldom use of it where applicable. Seldom/applicable-use issues are still issues.
Going to see if, perhaps, HPET timings are even more granular on "server"-class systems. My Intel dual 2011 board (Sandy/Ivy Bridge) with dual Xeons has... exceptionally poor performance in certain applications relative to findings over at NATEX with a multitude of identical systems. I may have enabled HPET by accident with a monitoring application at some point. Although it's not likely a root cause of my performance issues, anything I can scrape out of the system would be nice.
I choose to not believe AnandTech's convenient flaw with regards to the Intel's default vs. forced HPET performance. I will wait for several other hardware reviewers to confirm or debunk these results before I would tell anyone to make a decision. Something smells fishy about this whole thing. Why is the Intel HPET now all the sudden an issue? Or a better question, is AnandTech now in Intel's pocket?
"Why is the Intel HPET now all the sudden an issue?"
... because other websites do not force HPET on with intel CPUs during benchmarks, meaning they never encountered this issue. Anandtech was forcing HPET to be used which is not the default state and caused problems.
I think it would be useful / important in future CPU reviews to include a couple tests that measure the HPET performance impact when forced on. People will want to know, and it provides an interesting side-story for new CPUs or updated platforms / OS.
Also, I think it provides a public service, since if ordinary users run into this (perhaps by some third party software install forcing it on) they might go crazy trying to understand why their gaming performance tanked. A (small) page on this topic in each new CPU review will remind people that this is an important thing to consider if they are debugging issues on their own system!
After so many decades and with so many transistors available can't Intel/AMD add better more efficient and accurate ways of getting time (e.g. monotonic time and also "real time")? They keep adding YetAnotherSIMD but how about stuff like this? For bonus points add efficient easy ways to set (and cancel) interrupts that will trigger after certain times.
It's a total misconception that any circuit can get these times more accurately and efficiently at the same time. There's no demand for it on a broad scale either.
This has been something the gaming community has known about and discussed for years, you can find posts about HPET all over popular forums. However, they aren't backed by any sort of meaningful data and much like with core parking and vpns for gaming, legitimate hardware testing websites generally have either turned up their nose at it or disregarded them as complete snake oil.
I'm glad HPET is finally getting looked at in real depth. What was discussed here was just their real effect on quantifiable metrics, such as benchmarks, but what gamers discuss is their impact on stuttering or microstutters as well as hit registration in netcode (which is extremely dependent on timing). That wasn't looked at at all here. While this discussion was mainly focused on the difference between results between other websites and Intel vs AMD, I think that's not quite the right way to approach this. Rather it should be looked at the effect of timers in general on gaming as a whole.
The statement that you guys received from Intel pretty much makes it blatantly clear that no one really had any idea what was going on with the timers over there, or that they really had a big impact on anything outside of synthetic results. Microsoft has just put band-aids on top of band-aids to keep everything running, and it got to the point where it's no longer transparent to people who are buying hardware, people who are making hardware, or people who are developing software (beyond a few very niche groups) how they all interlock and intermingle with each other. I didn't know either, until I did some digging, and it took even more digging through multiple, sometimes vaguely related forum posts to learn there were more timers besides HPET.
A higher resolution timer should be good, especially for video games, but the impact it has on the system because of its crude and backwards implementation has made it such that it's basically just a synthetic cog that can't be used in practice. It makes you wish there was a solution that just put everything on the same page, hardware and software. I'm sure game developers who just lease an engine and then essentially make a mod have no idea what's going on here, and developers who actually make the engines (DICE, Epic, Crytek) may not even know there is a problem in the first place.
I do hope Anand takes this a bit further with frame time benchmarks and maybe FCAT designed to look specifically at this. As was mentioned in this article, implementations even seem different across different motherboards, which is a very, very bad thing and should also be looked at. There is a lot of room for future articles here focused around this specific issue until there is some remote amount of standardization.
If you're looking for more interesting things to test - almost no one tests net code in video games, with the handful of people who do making arbitrary comparisons and really having no tools or benchmarks to work with, even though video games (especially highly competitive ones) are extremely dependent on such things, especially when you get into the top 10% of the player base. People just assume gaming code 'works' and that that magical part of games is created equal, when it couldn't be any further from the truth. Net code is literally trying to hold together what is essentially a train wreck while trying to mask it from its users as best as possible. Some games do it a lot better than others.
I don't understand. The review should be made not based on how AMD or intel wants you to setup your PC, but how it ships by default. Most users don't fiddle with BIOS and don't understand HPET. The review should be done with default settings. You install the CPU, install windows and bam...review. Fiddling with options might be good as a side project to test stuff, but in most cases, people use it as is.
yes, but default (out of box) settings are more useful and informative for basic hw comparison benchmarks
then one can add extra 'tweaked' benchmark data the same way as OC results; also it would be nice to share/provide what tweaks have been used (e.g. a regedit file, etc.)
as for a lot of extra work arguments/comments, sure, but that's what makes the difference, and people who are interested (=enthusiasts) will happily look into it...and appreciate the extra effort
So were there more corrected/confirmed results coming, or just this vague rehash of the 'we blame the chipset and Windows timing for our crappy results' fiasco as a summary? :)
However, it sadly appears that reality diverges from theory – sometimes extensively so – and that our CPU benchmarks for the Ryzen 2000-series review were caught in the middle.
And this is what InSpectre says when open: photo uploaded with the Lightshot tool http://prntscr.com/kj4f47
And this is the report via get-processpeculationcontrolsettings at power shell https://prnt.sc/kj4fdj
I don't have a server, I'm just a regular guy who wants to get the most performance from my CPU without any patch applied via Windows. Is there a way to disable the Meltdown mitigation?
Tried also disabling it via the process mitigation settings, and the Meltdown mitigation is still not disabled.. x.x
Dr. Swag - Wednesday, April 25, 2018 - link
It looks like you guys are re-running all the benchmarks in the original review then, right? I see that the results look to have changed and fewer CPUs are on the lists (since you haven't rerun them all, I assume)
Ryan Smith - Wednesday, April 25, 2018 - link
Correct. We knew at the start of the Ryzen 2 review what benchmarks and what products we wanted to include; this timer issue hasn't changed that.
freaqiedude - Wednesday, April 25, 2018 - link
So would it be fair to say that Intel's HPET implementation is potentially buggy? It seems to cause a disproportionate performance hit.
chrcoluk - Wednesday, April 25, 2018 - link
No, it's just that TSC + LAPIC is the way to go. There is a reason that's the default in Windows and other modern OSes.
DanNeely - Wednesday, April 25, 2018 - link
It suggests that their implementation could probably be made less impactful than it currently is; but that high-precision timers have a performance impact has been known for a long time. In its guise as the multimedia timer in Windows over a decade ago, the official MS docs recommended using lesser timing sources in lieu of it whenever possible because it would affect your system.
What's new to the general tech site reading public is that there are apparently significant differences in the size of the impact between different CPU families.
Tamz_msc - Wednesday, April 25, 2018 - link
But is there a 'real' performance impact or does default HPET behavior simply introduce a fudge factor that alters how the tools report the numbers? Is there a way to verify the results externally?
eddman - Wednesday, April 25, 2018 - link
I'm wondering about the same thing. Do the games' frame rates really change (they get smoother or vice versa), or does the timer just mess up the numbers reported by benchmarks while the actual frame rate that reaches the display doesn't change?
rahvin - Wednesday, April 25, 2018 - link
I'd be more concerned that Intel has found a way to make the timer report false benchmark numbers that are higher than they actually are. I'd also be curious if the graphics card/CPU combination is potentially at fault.
Nvidia has been shown to cheat on benchmarks in the past by turning off features in certain games that are used for benchmarking to boost the score. Is Intel doing something similar?
Rob_T - Wednesday, April 25, 2018 - link
Rob_T - Wednesday, April 25, 2018 - link
I came across a similar issue on VMware, where a virtual machine's clock would drift out of time synchronisation. The cause of this was that VMware uses a software-based clock, and when a host was under heavy CPU load the VM's clock wouldn't get enough CPU resource to keep it updated accurately. This resulted in time running 'slowly' on the virtual machine.
Under normal circumstances this kind of time drift issue would be handled by the Network Time Protocol daemon slewing the time back to accuracy; the problem is the maximum slew rate possible is limited to 500 parts-per-million (PPM). Under peak loads we were observing the VM's clock running slow by anywhere up to a third. This far outweighed the ability of the NTP slew mechanism to bring the time back to accuracy.
If this issue has the same root cause, the software-based timers would start to run slowly when the system is under heavy load. Therefore more work could be completed in a 'second' due to its increased duration. It would be interesting to know if the results with the highest discrepancy were also the ones with the largest CPU loads. Looking at the gaming graphs on page 4, the biggest differences are at 1080p, which suggests this might be the case.
oleyska - Wednesday, April 25, 2018 - link
You also had an idle issue with Windows Server where time would drift. The high-load case I've never heard of; in our company we have thousands upon thousands of VMs using VMware, though.
Lord of the Bored - Thursday, April 26, 2018 - link
Not just nVidia. AMD's graphics division has been known to do it too, going back to when it was still an independent company. See: QUAFF3.EXE
Maxiking - Saturday, April 28, 2018 - link
Yeah, like AMD using less demanding tessellation to boost their score in benchmarks, or less demanding AF, completely ignoring what the application set. Oh yeah, I forgot: when the same thing happened with AMD it was a bug, when it was Nvidia, cheating. Those double standards are hilarious.
Fallen Kell - Wednesday, April 25, 2018 - link
The overhead of HPET causes the Intel CPUs to effectively slow down in combination with the Meltdown+Spectre fixes. An HPET read is a system call, which incurs the harshest penalty of the Meltdown+Spectre fixes. Add the fact that Intel's implementation of HPET is higher fidelity (i.e. a higher clock rate) than the specification requires, combine that with the even larger load on the CPU (due to Meltdown+Spectre), and it creates the large performance degradation.
The other timers (TSC + LAPIC) do not incur as high a penalty, as they do not require system calls that need to be protected from Meltdown/Spectre exploitation.
mczak - Wednesday, April 25, 2018 - link
The higher clock the HPET runs at for Intel should make absolutely no difference - it's the cost of reading the timer which counts; the rate it's running at should not be relevant. (Although a higher frequency may have a higher hw implementation cost.)
For this kind of slowdown as shown in some games, though, there have to be LOTS of timer queries. But I suppose it's definitely possible (I suspect nowadays everyone uses the TSC-based queries, which are much faster in hw and don't require syscalls, and forgets to test without them being available). Meltdown (and probably to a lesser degree Spectre) could indeed have a big impact on performance with HPET (if there are that many timing queries). I'd like to see some data with HPET but without these patches.
looncraz - Wednesday, April 25, 2018 - link
looncraz - Wednesday, April 25, 2018 - link
I believe, from my own testing, that it's merely a factor of reporting. HPET has always resulted in a smoother, faster system with fewer stutters when I enable it.
I also use SetTimerResolutionService to great effect.
tamalero - Thursday, April 26, 2018 - link
tamalero - Thursday, April 26, 2018 - link
It's interesting how different it is from person to person. I usually had HPET on with my Intel quad-cores.
But once I got my Threadripper I had to disable and remove HPET. If not I would get horrible stuttering.
GreenReaper - Friday, April 27, 2018 - link
It may be more *responsive*, yet able to do less work. In fact, speed and latency can be opposites - if you never pick your head up while doing a task, you'll probably execute it in the fastest possible time, at the expense of anything else that you might have wanted to do during that time. Most interactive users don't appreciate the computer not paying attention to them, so desktop computers trade off speed for reduced latency - although with multi-core systems the impact is small.
In the past, when the TSC was not present, HPET also moved the system more towards being a real-time system, and the cost of that was the overhead of the more frequent timer checks. Nowadays, especially with nonstop TSC (not impacted by power management), I'm not sure that is the case, but using it might still change latency - for good or ill.
Enabling HPET does not mandate forcing its use over the alternatives, but mandating it with 'bcdedit /set useplatformclock true' does. You can do the equivalent on Linux. And clearly, there is a cost, although that cost varies greatly depending on the platform and what you are doing.
Over the years I've seen numerous cases of people trying to reduce the number of gettimeofday calls to increase performance, and the cost of checking the HPET is probably one of the reasons.
Ryan Smith - Thursday, April 26, 2018 - link
"But is there a 'real' performance impact or does default HPET behavior simply introduce a fudge factor that alters how the tools report the numbers? Is there a way to verify the results externally?"Yes, you can compare the clocks with various applications, including the Timers applications. Which in our case shows that neither the ACPI nor QPC timers are drifting versus the HPET timer. So the performance difference really is a performance difference, and not a loss of timer accuracy.
https://images.anandtech.com/doci/12678/TimerBench...
chrcoluk - Wednesday, April 25, 2018 - link
Disabling TSC is a big no-no. Forced HPET, as an example, will disable optimised interrupts for network cards; the AMD and Intel review support staff probably don't have the knowledge to correctly advise you. I suggest you contact Microsoft.
Also, on the CPUs tested: what decided which CPUs you compared against? The AMD review guide or something else? There were a lot of omitted CPUs, and noticeably no manually overclocked Intel CPUs in your data.
See my first reply: if AMD software is forcing HPET, and especially if they are not informing the user of the consequences of doing this, then that's very irresponsible.
eddman - Thursday, April 26, 2018 - link
Some rather interesting info on timers from MS: https://msdn.microsoft.com/en-us/library/windows/d...
andrewaggb - Thursday, April 26, 2018 - link
that MS link was very informative. Thanks
chrcoluk - Wednesday, April 25, 2018 - link
So the question is, What other things in the OS have you "tweaked" and not disclosed?peevee - Thursday, April 26, 2018 - link
This.
Krysto - Wednesday, April 25, 2018 - link
Windows just released 2 new Spectre mitigations - make sure to include those, too.
Any success replicating these results with their new BIOS/AGESA? Or is that a Linux-only fix?
https://www.phoronix.com/scan.php?page=article&...
GreenReaper - Friday, April 27, 2018 - link
It may also cause a significant improvement in Raven Ridge performance:
https://overclock3d.net/news/cpu_mainboard/agesa_v...
takeshi7 - Wednesday, April 25, 2018 - link
How do I check and/or change the OS-level HPET setting in Windows? I'd like to know if my system is in the forced-HPET mode.
Alistair - Wednesday, April 25, 2018 - link
run cmd prompt as an administrator
bcdedit /enum
if there is a line like "use platformclock on" , then it is forced on
if no line, then it is off
BIOS is an "off" / "available" toggle, so there is no "on" in the BIOS
JoeyJoJo123 - Wednesday, April 25, 2018 - link
I'm probably misinterpreting this, and feel free to correct me, but an initial read of the title would suggest to the common user that "AMD Ryzen 2 has an issue with its internal timer", but upon reading, not only does Ryzen 2's HPET have a minimal performance hit compared to default behavior, but Intel's _own_ HPET is the variant with the LARGEST performance penalty.
This information would imply that the Ryzen 2 benchmarks done so far only stand to gain a bit of performance (under default configurations), but nothing more than ~15% on very select game titles at 1080p, and less than ~5% elsewhere. (This is a good thing, as the good Ryzen 2 benchmarks only stand to gain a bit by measuring performance with HPET off.) It also implies that Intel should likely review their HPET implementation to ensure that there's a minimal performance hit for applications which use this feature.
Ryan Smith - Wednesday, April 25, 2018 - link
"I'm probably misinterpreting this, and feel free to correct me, but initial read of the title would suggest to the common user that "AMD Ryzen 2 has an issue with its internal timer", but upon reading, not only does Ryzen 2's HPET have a minimal performance hit compared to default behavior, but Intel's _own_ HPET is the variant with the LARGEST performance penalty."Ultimately this article was a follow-up to our Ryzen 2 review, so it was necessary that it referenced it. (However the use of an Intel CPU picture was also very intentional)
ReverendCatch - Wednesday, April 25, 2018 - link
I feel like what this ultimately means is that Intel has issues with HPET, and that the results everyone else is getting are the problematic ones, not you guys. By forcing a more precise timer, Intel's... I dunno... "advantage", as it were, is eliminated.
Seriously, AMD shows 1% or less variance regardless of the timer used. Intel is upward of 30% or more. To me, that is a giant red flag.
nevcairiel - Wednesday, April 25, 2018 - link
Except that normal systems are not going to force HPET, so the more real-world realistic tests/results should really be used.
ReverendCatch - Wednesday, April 25, 2018 - link
The question I suppose I have is, are the results even real or legit at that point? Why does Intel suffer tremendously when using an accurate timer, and pull ahead when not?
How does that not sound fishy to you?
tmediaphotography - Wednesday, April 25, 2018 - link
From my reading of the article, it seems that Intel takes a larger hit because they use a higher-frequency HPET timer (24 MHz on the 8700K), and thus it is more taxing on the system. The calls are very much under the umbrella of things more negatively affected by Spectre, and as the i7-8700K system had an HPET rate close to 2x that of the R7 2700X, it stands to reason the i7 is going to benefit much more from it being turned off.
tl;dr, the more accurate timer is much more needy on the system, and the system under Spectre/Meltdown takes an even larger hit on the I/O calls to it.
Billy Tallis - Wednesday, April 25, 2018 - link
The HPET can run at a higher frequency without generating more CPU overhead, because it's really just a counter. Making that counter's value grow more quickly doesn't mean the CPU gets more interrupts per second.
patrickjp93 - Wednesday, April 25, 2018 - link
Because it makes perfect sense. Intel's losing more clock cycles since it is at vastly higher clock speeds, and it has Meltdown to contend with on top of Spectre. HPET, from my cursory reading, is 4 system calls compared to just 2 for TSC+LAPIC. The performance hit of that should then surprise no one.
With AVX-512, Intel has a lot of very high throughput instructions that AMD doesn't. If your software uses them, Intel pulls ahead vs. the best equivalent you could write for Epyc. That's not fishy. You're just taking the more optimal path to solving your problem. When Cascade Lake X and Cannon/Ice Lake arrive, this will all be fixed at the hardware level and the overhead will disappear.
Cooe - Wednesday, April 25, 2018 - link
Except that isn't actually true in practice for a wide variety of actual AVX-512 enabled workloads. Running those insanely wide registers drastically increases power draw & thermal output, and as a result clock speeds take a nose-dive. In certain SIMD workloads capable of AVX acceleration, this clock drop-off is so large that EPYC outperforms Skylake-X's AVX-512 support using much narrower AVX2 instructions/registers, simply because it can maintain vastly higher clock speeds during the load.
Heck, AnandTech even verified this with their own testing way back when. https://www.anandtech.com/show/12084/epyc-benchmar...
patrickjp93 - Wednesday, April 25, 2018 - link
If you're using the widest registers, yes, but there were also a lot of 128 and 256-bit extensions added that were missing from the AVX/2 stack. And Intel will bring the power draw down and the clocks up over time.

Dolda2000 - Wednesday, April 25, 2018 - link
The HPET, despite its name, is not more accurate. The TSC timer is accurate to CPU clock-cycle precision, which is usually more than two orders of magnitude better than the HPET.

Billy Tallis - Wednesday, April 25, 2018 - link
The difference between accuracy and precision is probably important here. TSC is definitely far more precise, but overclocking can make it much less accurate.

BillyONeal - Wednesday, April 25, 2018 - link
The TSC is *clock cycle* accurate but not *real time* accurate. It speeds up and slows down relative to real time with changes in CPU clock speed, such as what CPUs do on their own when the system power state changes.

That is, when a hypothetical 1.6GHz chip downclocks to 800MHz, the TSC's rate relative to real time is cut in half.
mczak - Wednesday, April 25, 2018 - link
No, that was true maybe 10+ years ago.

There are several flags to indicate TSC properties:
- constant (fixed clock rate, but may be halted depending on C-State)
- invariant (runs the same independent from C-State)
All intel cpus since at least Nehalem (the first Core-i chips) should support these features (not entirely sure about AMD, probably since Bulldozer or thereabouts).
The TSCs are also usually in-sync for all cpu cores (on single socket systems at least), albeit I've seen BIOSes screwing this up majorly (TSC reg is allowed to be written, but this will destroy the synchronization and it is impossible to (accurately) resync them between cores - unless your cpu supports tsc_adjust meaning you can write an offset reg instead of tsc directly), causing the linux kernel to drop tsc as a clock source even and using hpet instead (at least at that time the kernel made no attempt to resync the TSCs for different cores).
So on all "modern" x86 systems, usually tsc based timing data should be used. It is far more accurate and has far lower cost than anything else. If you need a timer (to generate an interrupt) instead of just reading the timing data, new cpus actually support a tsc_deadline mode, where the local apic will generate an interrupt when the TSC reaches a programmed value.
mczak - Wednesday, April 25, 2018 - link
FWIW I think the reason Ryzen Master (and some other software for OC) requires HPET is because, while the TSC frequency is invariant, it might not be as invariant as you'd like it to be when overclocking (though Ryzen Master dropped the HPET requirement a while ago).

The Ryzen PPR manual (https://support.amd.com/TechDocs/54945_PPR_Family_...) says that the TSC invariant clock corresponds to the P0 P-State - this would be the CPU base clock. So it naturally follows that if you were to change the base clock for overclocking, the TSC clock would change too, causing all sorts of mayhem, since the OS is likely to rely on the TSC being really invariant (as it's announced as such).
That said, this manual says (for MSRC001_0015) there's a "LockTscToCurrentP0: lock the TSC to the current P0 frequency" bit. It does just what the doctor asked for:
"LockTscToCurrentP0: lock the TSC to the current P0 frequency. Read-write. Reset: 0. 0=The TSC will count at the P0 frequency. 1=The TSC frequency is locked to the current P0 frequency at the time this bit is set and remains fixed regardless of future changes to the P0 frequency."
So maybe they are now setting this (or maybe they always set this and requiring HPET had other reasons...).
BillyONeal - Wednesday, April 25, 2018 - link
Because Intel has mitigated Spectre microcode available, and no such thing is available for AMD yet. Intel is paying that context switching overhead and AMD isn't (yet).

Spunjji - Thursday, April 26, 2018 - link
Factually incorrect here: https://arstechnica.com/gadgets/2018/04/latest-win...
Given AT are running brand-new AMD CPUs with the latest version of Windows 10, I'm pretty sure they have this code active.
Nutty667 - Thursday, April 26, 2018 - link
It's nothing to do with the accuracy of HPET, but the cost of reading HPET.

Reading HPET is an IO operation and a system call, which means you hit the Meltdown mitigation penalty, something that AMD does not suffer from.
chrcoluk - Wednesday, April 25, 2018 - link
No, forcing HPET is a very unusual config; no modern OS has it as the default timer on modern hardware. Not only is it slower, but things like MSI-X require the LAPIC to work.

In short, what AnandTech did here is "very bad".
patrickjp93 - Wednesday, April 25, 2018 - link
Well, for every OS but the BSD variety.

Ratman6161 - Friday, April 27, 2018 - link
"forcing HPET is a very unusual config"

Actually I think it may be very common. For example, based on what I read in the article, my system will have it forced and I never even knew it (will check as soon as I get home). See, I always do my overclocking from the BIOS/UEFI settings. But a while back, just for grins, I decided to try out Ryzen Master. I messed around with it for a while, didn't really like it, and uninstalled it. But the story says that when I installed it and rebooted, HPET was forced on, and that setting did not go away when I uninstalled Ryzen Master. So essentially anyone who has ever used Ryzen Master or a similar tool will have it forced on unless they knew enough to turn it off. I certainly had no clue how this worked, and I'm betting that most other people were as clueless as me.
Ratman6161 - Friday, April 27, 2018 - link
"In short what anandtech did here is "very bad"."

Or you could interpret it as very good, because all of us who had no clue have now learned something :)
mapesdhs - Sunday, May 6, 2018 - link
Indeed, all of this could explain why I had some weird results a couple of years ago when testing certain setups with a 980 Ti; I think one of the OC tools may have forced HPET in the manner described. I need to check. To be precise, with hindsight it correlates with the time MSI changed Afterburner to make use of RivaTuner. I could be wrong, but just maybe RivaTuner forces HPET as the main timer...

HStewart - Wednesday, April 25, 2018 - link
Did you hear the news of Chris Hook, senior marketing director at AMD, leaving? It's possible that the effect of Raju leaving AMD has much more impact than people realize.

It's good to see competition out there, it helps the industry stay alive - but I'm concerned that, at least for AMD, it will be short-lived.
Hifihedgehog - Wednesday, April 25, 2018 - link
Chris Hook was not very tech savvy for someone of his position, so I am relieved he has left. He started working at ATi around 2000, yet he claimed in his farewell letter that high resolution gaming at the time constituted 320x240. We are all fortunate that the company has shed his weight and his position is now in far more competent hands.

HStewart - Wednesday, April 25, 2018 - link
It is interesting that people say so many good things about people while they're working there, but when they leave they make them sound like traitors. I see it differently - for someone to be there so long in such a high position, they may be seeing the writing on the wall.

Big question is where Chris Hook went - and yes, back in the early days we did not have 4K or even 1080p. I actually talked to ATI developers during the late 1990s and early 2000s. Back then GPUs were stupid.
Keep in mind this guy was Marketing, not technical
But to me this is a sign that AMD is burning the candle at both ends, if you combine it with Raju.
oleyska - Wednesday, April 25, 2018 - link
Have you seen ALL the big players that were with AMD and ATI when they were really good? They are rejoining; it's the writing on the wall.
AMD is a good company to work for again. Those who worked there a decade ago were really passionate about their work and the company; then Hector Ruiz came, and it ended with Bulldozer, selling off GloFo, overpaying for ATI, and him leaving when nothing was in any way positive apart from a GPU hardware product stack that was good.
Hifihedgehog - Wednesday, April 25, 2018 - link
This as well.

Hifihedgehog - Wednesday, April 25, 2018 - link
This is an exaggeration when, in reality, it is strictly on a case-by-case basis that such judgement calls should wisely be made. For example, when Steve Ballmer left Microsoft, it was a positive thing overall for the entire company to be shed of his weight. Usually, these "writing on the wall" conclusions are prematurely drawn by news venues desiring a hot, trending news item, and are not necessarily accurate or fact-based. As to here, in truth, I never held Chris Hook or Raju Koduri in high regard before, during, or after their announced exoduses from AMD; both came across as underqualified and unknowledgeable for their respective management levels.

Manch - Wednesday, May 2, 2018 - link
HStewart is an Intel fanboy. Despite the turnaround for AMD in the CPU space, according to HStewart the sky is always falling for AMD. Intel can do no wrong.

Death666Angel - Monday, April 30, 2018 - link
"and yes back in early days we did not have 4k or even 1080P."

Yeah, but my first PC when I was 12 years old in 2000 had a mediocre monitor with an 800 x 600 resolution (friends had 1024, and a bit later 1280). Him saying that in 2001, when he joined, 320 x 240 was high-res just seems ignorant. Or a crappy joke.
arashi - Monday, July 2, 2018 - link
HStewart displays the mental agility of a drunken cricket unless it is to defend Intel. Don't bother.

Cooe - Wednesday, April 25, 2018 - link
Chris Hook was a marketing guy through and through, and was behind some of AMD's worst marketing campaigns in the history of the company. Him leaving is a total non-issue in my eyes, and potentially even a plus, assuming they can replace him with someone who can actually run good marketing. That's always been one of AMD's most glaring weak spots.

HilbertSpace - Wednesday, April 25, 2018 - link
Thanks for the great follow-up article. Very informative.

Aichon - Wednesday, April 25, 2018 - link
I laud your decision to reflect default settings going forward, since the purpose of these reviews is to give your readers a sense of how these chips compare to each other in various forms of real-world usage.

As to the closing question of how these settings should be reflected to readers, I think the ideal case (read: way more work than I'm actually expecting you to do) would be that you extend the Benchmarking Setup page in future reviews to include mention of any non-default settings you use, with details about which setting you chose, why you set it that way, and, optionally, why someone might want to set it differently, as well as how it might impact them. Of course, that's a LOAD of work, and, frankly, a lot of how it might impact other users in unknown workflows would be speculation, so what you end up doing should likely be less than that. But doing it that way would give us that information if we want it, would tell us how our usage might differ from yours, and, for any of us who don't want that information, would make it easy to skip past.
phoenix_rizzen - Wednesday, April 25, 2018 - link
Would be interesting to see a series of comparisons for the Intel CPU:

No Meltdown, No Spectre, HPET default
No Meltdown, No Spectre, HPET forced
Meltdown, No Spectre, HPET default
Meltdown, No Spectre, HPET forced
To compare to the existing Meltdown, Spectre, HPET default/forced results.
Will be interesting to see just what kind of performance impact Meltdown/Spectre fixes really have.
Obviously, going forward, all benchmarks should be done with full Meltdown/Spectre fixes in place. But it would still be interesting to see the full range of their effects on Intel CPUs.
lefty2 - Wednesday, April 25, 2018 - link
Yes, I'd like to second this suggestion ;). No one has done any proper analysis of Meltdown/Spectre performance on Windows since Intel and AMD released the final microcode mitigations (i.e. post April 1st).

FreckledTrout - Wednesday, April 25, 2018 - link
I agree, as the timing makes this very curious. One would think this would have popped up before this review. I get the gut feeling that HPET being forced is causing a much greater penalty with the Meltdown and Spectre patches applied.

Psycho_McCrazy - Wednesday, April 25, 2018 - link
Thanks to Ryan and Ian for such a deep dive into the matter and for finding out what the issue was...

Even though this changes the gaming results a bit, it still does not change the fact that the 2700X is a very, very competent 4K gaming CPU.
Zucker2k - Wednesday, April 25, 2018 - link
You mean GPU-bottlenecked gaming? Sure!

Cooe - Wednesday, April 25, 2018 - link
But to be honest, the 8700K's advantage when totally CPU-limited isn't all that fantastic either. Sure, there are still a handful of titles that put up notable 10-15% advantages, but most are now well in the realm of 0-10%, with many titles in a near dead heat, which compared to the Ryzen 7 vs. Kaby Lake launch situation is absolutely nuts. Hell, even comparing the 1st-gen chips today vs. then, the gaps have all shrunk dramatically with no changes in hardware, and this slow & steady trend shows no signs of petering out (Zen in particular is an arch design extraordinarily ripe for software-level optimizations). Whereas there were a good number of build/use scenarios where Intel was the obviously superior option vs. 1st-gen Ryzen, with how much the gap has narrowed, those have now shrunk into a tiny handful of rather bizarre niches.

These being, first & foremost, gamers who use a 1080p 144/240Hz monitor with at least a GTX 1080/Vega 64. For most everyone with more realistic setups, like 1080p 60/75Hz with a mid-range card, or a high-end card paired with 1440p 60/144Hz (or 4K 60Hz), the Intel chip is going to have no gaming performance advantage whatsoever, while being slower to a crap ton slower than Ryzen 2 in any sort of multi-tasking scenario or decently threaded workload. And unlike Ryzen's notable width advantage, Intel's general single-thread perf advantage is most often near impossible to notice without both systems side by side and a stopwatch in hand while running a notoriously single-thread-heavy load like some serious Photoshop. Both are already so fast on a per-core basis that you pretty much have to deliberately seek out situations where there'll be a noticeable difference, whereas AMD's extra cores/threads & superior SMT become readily apparent as soon as you start opening & running more and more things concurrently (all modern OSes are capable of scaling to as many cores/threads as you can give them).
Just my 2 cents at least. While the i7-8700K was quite compelling for a good number of use-cases vs Ryzen 1, it just.... well isn't vs Ryzen 2.
Tropicocity - Monday, April 30, 2018 - link
The thing is, any gamer (read: gamer!) looking to get a 2700X or an 8700K is very likely to be pairing it with at least a GTX 1070, and more than likely either a 1080/144, a 1440/60, or a 1440/144 monitor. You don't generally spend $330-$350 / £300+ on a CPU as a gamer unless you have sufficient pixel-pushing hardware to match it with.

Those who are still on 1080/60 would be much more inclined to get more 'budget' options, such as a Ryzen 1400-1600, or an 8350K-8400.
There is STILL an advantage at 1440p, which these results do not show. At 4k, yes, the bottleneck becomes almost entirely the GPU, as we're not currently at the stage where that resolution is realistically doable for the majority.
Also, as a gamer, you shouldn't neglect the single-threaded scenario. There are a few games that benefit from extra cores and threads, sure, but if you pick the most played games in the world, you'll come to see that the only thing they appreciate is clock speed and single (occasionally dual) threaded workloads: League of Legends, World of Warcraft, Fortnite, CS:GO, etc.
The games that are played by more people globally than any other, will see a much better time being played on a Coffee Lake CPU compared to a Ryzen.
You do lose the extra productivity, and you won't be able to stream at 10mbit (Twitch is capped to 6, so it's fine), but you will certainly have improvements when you're playing the game for yourself.
Don't get me wrong here; I agree that Ryzen 2 vs Coffee Lake is a lot more balanced and much closer in comparison than anything in the past decade in terms of Intel vs AMD, but to say that gamers will see "no performance advantage whatsoever" going with an Intel chip is a little too farfetched.
Spunjji - Thursday, April 26, 2018 - link
Is there any other kind? Either you're at the budget end where everything is GPU-limited, or at the high end where not spending a decent amount on a monitor to go with your £500 GPU is a crying shame.

There's a niche where Intel has a clear win, and that's people running 240Hz 1080p rigs. For most folks with the money to spend, 2560x1440 (or an ultra-wide equivalent) @ 144Hz is where it's at for the ideal compromise between picture quality, smoothness and cost. There are a lot of monitors hitting those specs right now.
eva02langley - Wednesday, April 25, 2018 - link
I was mentioning in the review that 1080p benchmarks need to go... now it is even more true with HPET.

Kudos on this, guys; it is really interesting to read.
DanNeely - Wednesday, April 25, 2018 - link
>93% of Steam gamers' main displays are at 1080p or lower.

If the new review suite split which GPUs are run at which resolutions, dropping 1080p from the high-end card section might be reasonable. OTOH, with 240Hz 1080p screens a thing, there's still an enthusiast market for 1080p combined with a flagship GPU.
IndianaKrom - Wednesday, April 25, 2018 - link
* Raises hand; that's me, someone with a GTX 1080 and a 240Hz 1920x1080 display.

The industry seems obsessed with throwing higher and higher spatial resolution at gamers when what I really want is better temporal resolution.
eva02langley - Thursday, April 26, 2018 - link
1080p @ 60Hz is a non-issue, because there we are talking about an RX 580/GTX 1060 or below; at that point the GPU is the bottleneck.

It only affects 1080p @ 144Hz with a GTX 1080/Vega 64 minimum, which is really < 2%.

You are really the exception, yet the 1080p CPU-bottleneck focus is aimed entirely at you, without even taking into consideration other use cases.
Holliday75 - Thursday, April 26, 2018 - link
I am willing to bet that 95%+ of Steam users have no clue what we are talking about and don't care.

mapesdhs - Sunday, May 6, 2018 - link
IndianaKrom, are you aware that using high(er) frequency monitors retrains your brain's vision system so that you become tuned to that higher refresh rate? New Scientist had an article about this recently; gamers who use high frequency monitors can't use normal monitors anymore, even if previously they would not have found 60Hz bothersome at all. In other words, you're chasing goalposts that will simply keep moving by virtue of using ever higher refresh rates. I mean blimey, 240Hz is higher than the typical "analogue" vision refresh of a bird. :D

IMO these high frequency monitors are bad for gaming in general, because they're changing product review conclusions via authors accepting that huge fps numbers are normal (even though the audience that would care is minimal). Meanwhile, game devs are not going to create significantly more complex worlds if it risks new titles showing more typical frame rates in the 30s to 80s, as authors would then refer to that as slow, perhaps criticise the 3D engine, moan that gamers with HF monitors will be disappointed, and I doubt GPU vendors would like it either. We're creating a marketing catch-22 with all this, doubly so as VR imposes some similar pressures.
I don't mind FPS fans wanting HF monitors in order to be on the cutting edge of competitiveness, but it shouldn't mean reviews become biased towards that particular market in the way they discuss the data (especially at 1080p), and it's bad if it's having a detrimental effect on new game development (I could be wrong about the latter btw, but I strongly suspect it's true from all I've read and heard).
We need a sanity check with frame rates in GPU reviews: if a game is doing more than 80 or 90fps at 1080p, then the conclusion emphasis should be that said GPU is more than enough for most users at that resolution; if it's well over 100fps then it's overkill. Just look at the way 8700K 1080p results are described in recent reviews, much is made of differences between various CPUs when the frame rates are already enormous. Competitive FPS gamers with HF monitors might care, but for the vast majority of gamers the differences are meaningless.
Luckz - Monday, May 14, 2018 - link
So the real question is if someone first exposed to 100/120/144 Hz immediately squirms in delight, or if they only vomit in disgust months later when they see a 60Hz screen again. That should be the decider.

Spunjji - Thursday, April 26, 2018 - link
1080p is popular in the Steam survey where, incidentally, so is low-end GPU and CPU hardware. Most of those displays are 60Hz and an awful lot of them are in laptops. Pointing at the Steam surveys to indicate where high-end CPU reviews should focus their stats is misguided.

I'm still not certain that testing CPUs in a way that artificially amplifies their differences in a non-CPU-reliant workload is really the way to go.
ElvenLemming - Wednesday, April 25, 2018 - link
You can just ignore the 1080p benchmarks if you don't think they're meaningful. As DanNeely said, 93% of surveyed Steam users are at 1080p or lower, so I'd be shocked if more than a handful of review sites get rid of it.

eek2121 - Sunday, April 29, 2018 - link
Steam benchmarks are meaningless unless you can filter based on several factors: OEM vs. custom builds, country, etc. That is why I consider anybody who brings up Steam survey users to not know what they are talking about. The US has around 300 million people and we spend the highest amount on PC hardware in the world, yet China has over a billion people and spends the least. Steam groups everyone together. Second, OEM systems force HPET on. I just checked my laptop running an i7 mobile 6700HK or whatever, and HPET was on in both the BIOS as well as in Windows. So no, you can't make assumptions. Custom builders typically have HPET off, and OEM builders have HPET on. If I were AT, I'd force HPET on. Not to screw one company over vs. another, but to force them to improve their HPET implementations.

Sancus - Wednesday, April 25, 2018 - link
Yeah, benchmarking CPUs exclusively via GPU-bottlenecked tests is a great idea.

IndianaKrom - Wednesday, April 25, 2018 - link
One does have to question the usefulness of a 4K benchmark in a CPU review, other than "yep, it's still GPU-limited". A whole bunch of graphs showing ±3%; content to pad the ads with, I guess...

GreenReaper - Friday, April 27, 2018 - link
I imagine the intent is to let people who want to be 4K gamers know right away that it doesn't matter which CPU they get. Or just to find interesting anomalies. You don't know what you don't test.

RafaelHerschel - Wednesday, April 25, 2018 - link
This makes no sense. I use a GTX 1070 and game at 1080p; let's forget about the reasons why for now, but I'm not alone in this. Most gamers, even most of those with slightly above average cards, use 1080p monitors. HPET is not an issue; very few people force HPET on. There is a reason only AnandTech got these numbers.

SkyBill40 - Friday, April 27, 2018 - link
I'm in a similar situation but with a 1080. I'm playing at 1920x1200 @ 60Hz only because I've got other financial priorities keeping me from buying a new monitor.

eek2121 - Sunday, April 29, 2018 - link
You are also alone. You cannot rely on Steam survey numbers to claim supremacy; as I mentioned in an earlier comment, a Steam survey entry can be nothing more than a Dell machine with a GTX 1070 running 1080p, or it can be a Threadripper machine running dual 1080 Tis @ 1080p. In my situation, I game at 1440p on a 1950X and a 1080 Ti, HPET off. Does that make me a minority? No it does not. There is no way to measure per-capita spending on PC hardware due to the second-hand market and different demands in different countries. My thought is to force HPET on for all, and may the best company win... just like with anti-aliasing in gaming.

Peter2k - Wednesday, April 25, 2018 - link
As for the last question: maybe it should be made clear if it's on/forced on and so forth.
This is anecdotal
But my testing has shown in a handful of games that HPET is detrimental to gaming (7600K at 5.2Ghz)
FPS were the same, but HPET introduced a stutter the whole time
Now I also could've sworn HPET was default off in the UEFI on my Z170
Maybe that's something to look into as well
How is HPET set by default in the UEFI?
If it's default on with Z370, then it should be made clear it's on
If it's default off for older/newer chips, that should be made clear too, me thinks
GreenReaper - Friday, April 27, 2018 - link
The default would be for it to be enabled, since it is a standard feature of the platform nowadays. However, forcing it to be used instead of the CPU's TSC (when it is also available) is not standard in most modern operating systems where the TSC is known to be reliable, or can be made so.eek2121 - Sunday, April 29, 2018 - link
A stutter is a defect. I favor neither Intel nor AMD in this case; however, IMHO there can be only 2 outcomes: a) Intel fixes its HPET implementation, or b) Microsoft removes HPET altogether. Only then will we see the true numbers.

jjj - Wednesday, April 25, 2018 - link
"however it is clear that there is still margin that benefits Intel at the most popular resolutions, such as 1080p."

That's a false and highly misleading statement; it's not about the resolution, it is about an over-dimensioned GPU for a given resolution. Easiest way to put it: high-FPS gaming.
90% will game at 1080p with a 1060 not a 1080.
Marketing might have moved rich children from 30-60FPS to 120FPS but people are not made out of money and you know very well how limited high end GPU volumes are.
For now you should test with and without HPET, at least for a few results, and highlight the HPET impact.
One thing I did not notice being addressed after skimming the article is the accuracy of the results with HPET disabled. How certain are you that the results are not way off to favor Intel now?
Maxiking - Wednesday, April 25, 2018 - link
The only misleading and false statement is that 90% will play at 1080p with a 1060.

Remember, in the future, the 1160 will probably be more powerful than the 1080, the 1260 than the 1280, and so on. The bottleneck is still here, not gonna disappear, will only get bigger with more powerfull cards.
Regardless, how certain are you that the results are not way off to favour AMD now?
Maxiking - Wednesday, April 25, 2018 - link
Damn it, why is there no option to edit messages? *Powerful* kek.

RafaelHerschel - Wednesday, April 25, 2018 - link
Games get more demanding. I'm convinced that at some point 1080p will become obsolete, but we are not there yet. For me, 1080p maxed out (sometimes with DSR enabled) looks good enough and ensures that I get the smoothness that is important to me.

mapesdhs - Sunday, May 6, 2018 - link
Where's the evidence games are becoming more demanding? If that were true, typical frame rate spreads in reviews would not be going through the roof. It's been a very long time since any GPU review article talked about new visual features to enable more complex and immersive worlds. These days, all the talk is about performance and resolution support, not fidelity.

jjj - Wednesday, April 25, 2018 - link
People buy GPUs by targeting the FPS they need inside a budget and sane people do not buy more than they need.And ofc as someone else pointed out, games evolve too, otherwise we would not need better GPUs.
Remember that GPUs have been around for decades, we know how things go.
eek2121 - Sunday, April 29, 2018 - link
Benchmarks should not be done on a 1060. The purpose of a CPU benchmark is to measure CPU performance. IMO a 1080 Ti at MINIMUM should be used to eliminate GPU bottlenecks. There are some games out there that still bottleneck at 1080p.

eva02langley - Wednesday, April 25, 2018 - link
You are damn wrong. Sure you can see a CPU bottleneck... however, can you? Now with HPET brought to light, you can alter results dramatically for Intel; however, is HPET a default function for the OS?

Basically, you are telling me that benchmarks should have HPET off, the configuration that is supposed to be the default, just because we can see which architecture is better in a non-conventional use?
So what is the value of those precious 1080p benchmarks if they don't represent the configuration the typical end user is going to use the product for in its intended use?
It is coming back to the USE CASE.
If a budget user buys an RX 560, CPU choice at 1080p won't matter.
If a mid-range user buys an RX 580/GTX 1060, CPU choice at 1080p won't matter.
If a high-end user buys a GTX 1080/Vega 64, CPU choice at 1080p @ 144 Hz will barely matter.
If an enthusiast user buys a 1080 Ti, CPU choice will matter @ 144 Hz.
And now... what happens with HPET in the picture? How can you accurately render results without biasing yourself anymore?
One thing for sure, Intel needs to fix their stuff.
eva02langley - Wednesday, April 25, 2018 - link
"If an enthusiasm user buy a 1080 TI, CPU choice at 1080p will matter @ 144 Hz."

Mistake
malakudi - Wednesday, April 25, 2018 - link
Thank you for the analysis. Can you somehow verify that the very large variations (RoTR 1-2-3, Civ6) in performance on the i7-8700K with HPET not forced are real? Is it possible that the reported FPS are wrongly calculated when using a non-HPET timer? Can you also get a comment from the developers of those games about this result? 45%, 76% and 69% performance differences do not seem normal.

Ryan Smith - Thursday, April 26, 2018 - link
"Thank you for the analysis. Can you somehow verify that very large variations (RoTR 1-2-3, Civ6) of performance on i7-8700K with HPET not forced are real?"

Can and done. Using the Timers application we can compare the outputs of all of the timers, and ignoring the ancient 1KHz RTC timer, all of the important timers show no drift versus HPET. So there isn't a loss of accuracy affecting the calculation of frame rates.
https://images.anandtech.com/doci/12678/TimerBench...
peevee - Thursday, April 26, 2018 - link
"and ignoring the ancient 1KHz RTC timer"

And yet it is _RT_C and should not be ignored. Moreover, it is more than precise enough for every test running at least a second.
peevee - Thursday, April 26, 2018 - link
Just use an external clock (take a 60+ fps video camera, record the sequence, and see what is going on outside of the clocks on the same computer).

Holliday75 - Thursday, April 26, 2018 - link
What if the clock on the camera is using HPET? Osnap.

Suddenly the only clock I trust is in Boulder, Colorado.
kpb321 - Wednesday, April 25, 2018 - link
IMO, if it has that dramatic an effect on things, I'd like to see continued testing and coverage of the issue. As pointed out, it's not exactly hard to end up with HPET turned on. I'm pretty sure I've launched Ryzen Master at least once on my system, so it should be forced on. I also use my MB manufacturer's fan controller software, which may or may not do the same thing.

Testing every game on every system with HPET off and then on may not be practical, but I'd still like to see tests for Civ6 and/or RoTR, as those seem to be the biggest outliers, until the impact becomes minimal or statistically meaningless.
Silma - Wednesday, April 25, 2018 - link
Kudos to the AnandTech team, its integrity, its in-depth knowledge, its hard work!

This is why AnandTech is my number one trusted source for benchmarks.
johnsmith222 - Wednesday, April 25, 2018 - link
I've searched YouTube and was surprised by the number of videos addressing the Intel HPET problem: https://www.youtube.com/results?search_query=intel...
Bassicaly it is known problem.
Hifihedgehog - Wednesday, April 25, 2018 - link
First, Smeltdown and now an HPET bug. This Intel HPET bug deserves an article entirely of its own. Haha...

Crazyeyeskillah - Wednesday, April 25, 2018 - link
It's not a bug. You literally have no concept of what you are talking about, yet you are commenting on an article that explains it in greater detail than ever put to page. HPET has to be used in extreme overclocking scenarios because Windows 8/10 create variances in those situations. Ryan misinterpreted that it needed to be on always, and thus this situation was born. HPET isn't a bug; it's a setting in the BIOS that forces clock synchronization on the faulty Windows 10/8 system, which can give incorrect data (i.e. benchmark times etc.)
RafaelHerschel - Wednesday, April 25, 2018 - link
Do you realize that they got it wrong? They deviated from the default situation and got results that were misleading...

eva02langley - Thursday, April 26, 2018 - link
They didn't get it wrong; they simply used default settings for default systems. Even Intel told them to leave HPET on.

eddman - Thursday, April 26, 2018 - link
No, Ian was forcing HPET to be used in the benches. He didn't use the default state. Intel did not tell them to FORCE HPET to be used either.

peevee - Thursday, April 26, 2018 - link
They did get it wrong. 100% their fault, with the stupid excuse that their background is in overclocking.

rocky12345 - Wednesday, April 25, 2018 - link
Good to see things cleared up on this. My question is this: I understand that on the AMD systems HPET was turned to forced on because Ryzen Master needs it, am I right on that? So that explains why it was turned on for the AMD systems, but if it was not at default for the Intel systems either, how or what changed it to forced on the Intel systems? Was it changed in the Intel BIOS to enabled, which then forced the OS to use the forced-on option? My other concern is that if it eats away at so much performance, why haven't Intel and AMD come up with better ways to deal with this issue? Or is it kind of a newer problem because of Spectre/Meltdown patches and microcode updates on the Intel platform, and HPET in forced mode kills performance because of that?

johnsmith222 - Wednesday, April 25, 2018 - link
They've forced HPET on in benchmarks via a script (as I understand from the article), and for AMD it is irrelevant whether it's on or off (also explained in the article).

rocky12345 - Wednesday, April 25, 2018 - link
So basically the moral of the story here is: leave things as the hardware vendor intended, or at default settings, and everything should be fine. This does raise about a million more questions on how reviewers should, or even need to, change the way they set up the gear for reviewing. It also confirms that this, and probably a lot of other variables in the hardware that can skew results one way or another, answers at least part of the question as to why the same hardware performs so differently from review to review. Just for the record, I am not saying AnandTech in any way tried to skew the numbers; I am very sure that is not the case here.

Maxiking - Wednesday, April 25, 2018 - link
Well, if you have been enforcing HPET on for all those years, it pretty much means that all the tests on this site are not valid and not representative at all. HPET is widely known as the cause of several performance issues /stuttering, fps drops on CPUs with more cores/ but I never personally believed it, because there were no benchmarks to support it, only some geeks on forums posting their latency screens with HPET on/off and anecdotal evidence from people who allegedly gained/lost fps by turning it on/off.
The point is: the benchmarks here are not run on the same sticks of RAM /frequencies, timings/, but the highest officially supported frequency is used to simulate what the platform is capable of.
So why turn on/enforce something by default if it could potentially cause performance regression and make your avg, min, max, 99th percentile absolutely skewed?
peevee - Thursday, April 26, 2018 - link
This!

mapesdhs - Sunday, May 6, 2018 - link
This what? Lost me there. Btw, some older benchmarks don't work with HPET off (I think 3DMark Vantage is one of them).
lefenzy - Wednesday, April 25, 2018 - link
I'm pretty confused. In these cases, it's the denominator (time) that's changing that affects the resultant performance assessment, right? The raw performance (numerator) is unchanged. E.g. FPS = frames / time. Frames remain the same, but time is measured differently.
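The point can be made concrete with a toy calculation (the 5% skew figure below is purely illustrative): if the frame count is fixed but the measured time is wrong, the reported FPS changes even though the frames actually delivered do not.

```python
def reported_fps(frames, measured_seconds):
    # FPS = frames / time: only the denominator depends on the timer.
    return frames / measured_seconds

frames = 6000        # identical rendering work in both cases
true_seconds = 60.0

accurate = reported_fps(frames, true_seconds)        # 100.0 FPS
# A timer running 5% slow under-reports elapsed time,
# inflating the apparent frame rate.
skewed = reported_fps(frames, true_seconds * 0.95)   # ~105.3 FPS
print(accurate, round(skewed, 1))
```

This is exactly why the drift check matters: if the timers agree with HPET, the denominator is trustworthy and the measured differences are real performance differences.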
JlHADJOE - Wednesday, April 25, 2018 - link
This. It stands to reason that there is no actual performance difference, just an inconsistency in how time is measured. For that matter, we're not even sure whether either system is accurately timing itself.
IMO we shouldn't be trusting the benchmarked system's timer at all. Run an ntp server elsewhere on the network and get time from that before and after each benchmark. Likewise all gaming results really should go through an external tool like FCAT.
AFAIK it's only in the PC industry that benchmarks trust the system being measured to do book/time keeping for itself, which is kinda nuts considering the system clock will be going from base to boost and each core will be running at different frequencies, and the whole system is subject to thermal swings.
ReverendCatch - Wednesday, April 25, 2018 - link
Agreed, using the system to basically audit itself is kind of a flaw in the design of testing. However, applying a third-party time index isn't easy either. I guess you could film each game's performance on the monitor with a high-speed camera, but parsing that data would be nightmarish at best.
The easiest way would be to use an external computer's (such as a web time server's) timestamp before the test and when it finishes, with the variation being the average ping time to the server, I guess. But that changes the way testing and benchmarks are done.
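The external-timestamp-with-ping-compensation idea is essentially what NTP's on-wire protocol already does. A sketch of the standard four-timestamp calculation (the timestamp values below are hypothetical, not tied to any particular benchmark harness):

```python
def ntp_offset_and_delay(t1, t2, t3, t4):
    """Classic NTP clock-offset math.

    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive
    (t1/t4 on the client clock, t2/t3 on the server clock).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Client clock 9 s behind the server, with a 2 s round trip:
print(ntp_offset_and_delay(100.0, 110.0, 111.0, 103.0))
```

Querying a server before and after a benchmark run and applying this correction would give an elapsed time independent of the system under test, at the cost of the residual network-jitter uncertainty.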
eddman - Wednesday, April 25, 2018 - link
The solution already exists. DigitalFoundry does it. They capture the output video with an external device and then run it through a special software that is able to determine frame times and produce a frame rate graph. This is how they manage to determine exact frame rates even for consoles.

Cooe - Wednesday, April 25, 2018 - link
FCAT testing. Super expensive to do right (requires beefy enough hardware on both the dedicated capture rig & its actual video capture card itself such that the video capture of whatever's being tested doesn't drop a single frame [as the capture rig isn't what's being tested/analyzed, it needs to be as close to perfect frame-pacing/capture as possible]), but suuuuper freaking awesome haha. I'm pretty sure Digital Foundry's FCAT analysis software was even designed in-house. Lol Richard's steezy FCAT testing has become like his calling card by this point.
BillyONeal - Wednesday, April 25, 2018 - link
If HPET results in a system call, it is both. The Meltdown and Spectre mitigations make ordinary system calls *much* more expensive, and AMD's platform isn't mitigating those yet.

Topweasel - Wednesday, April 25, 2018 - link
More stringent testing of HPET needs to be done. It could be the case that everything is performing the same in all tests but the results are reporting the wrong numbers (which I would assume would be the case for the HPET-not-forced results). But forcing HPET when not expected could be causing other timer-related issues in the programming that could result in loss of performance.

ReverendCatch - Wednesday, April 25, 2018 - link
Yeah, basically. It's the time portion that is problematic. It's been the case since reviewers were reviewers and using FPS.

A more accurate measure would be frames rendered for the same, identical test on each system. Most games do not provide such information or tests, though.
Alistair - Wednesday, April 25, 2018 - link
No, I believe he was saying that if you aren't messing around with extreme OC and altering base clocks etc., the time portion is always accurate. The raw performance does change from the CPU overhead of HPET on Intel systems, by a lot in some cases.

BillyONeal - Wednesday, April 25, 2018 - link
Not just extreme OC; anything that changes the clock speed, for example the CPU down-clocking at idle, will change the rate of TSC relative to "real time". HPET exists to be the arbiter of "real time" unmoored from CPU frequency.

eddman - Friday, April 27, 2018 - link
No, TSC in modern CPUs is constant.gammaray - Wednesday, April 25, 2018 - link
could you run tests in 1440p? thx.brxndxn - Wednesday, April 25, 2018 - link
Seriously.. 1080p sucks.crimson117 - Thursday, April 26, 2018 - link
They run in both 1080p and 1440p. When you are examining CPU performance, 1440p is usually bottlenecked by the GPU, so the CPUs are all waiting around for the GPU and don't get to really show who's faster.
When you run at 1080p, the GPU has no problem handling that, so CPUs are no longer waiting around for the GPU. More responsive CPU's keep up with the GPU to provide super high framerates. Slower CPUs will drag the system down, lowering framerates especially in CPU-intensive situations like tracking lots of players or mobs at once.
chrcoluk - Wednesday, April 25, 2018 - link
You didn't ask the most important people: Microsoft, the developers of Windows. They state the default timers shouldn't be fiddled with unless you are debugging timer problems or have a specific need to force a certain timer. TSC is the best performing timer for modern processors.

I remember a couple of years back when I followed a silly guide on the net to force HPET in Windows, and later discovered it was to blame for weird stutters I had in games.
If AMD's own software is forcing HPET in the OS, and especially if they're not telling the end user what they're doing, then that's very irresponsible.
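For what it's worth, the forced-HPET state on Windows lives in the boot configuration store rather than in any one application, which is why a guide or tool can leave it set long after it's gone. The standard way to inspect and undo it is `bcdedit` from an elevated command prompt (shown as a sketch; edit your boot configuration at your own risk):

```shell
:: Show the current boot entry; "useplatformclock  Yes" means HPET is forced.
bcdedit /enum {current}

:: Force HPET on (what some guides and tools set):
bcdedit /set useplatformclock true

:: Remove the override so the OS chooses its own timer (the default):
bcdedit /deletevalue useplatformclock
```

A reboot is required for the change to take effect.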
ReverendCatch - Wednesday, April 25, 2018 - link
On the other hand, everything was within 1% difference, which is barely even academic.

Tomb Raider seems to be an outlier on both platforms.
Alistair - Wednesday, April 25, 2018 - link
Yes, I'd like to see AMD change their software to avoid using HPET. It shouldn't be doing so.

Evil Underlord - Wednesday, April 25, 2018 - link
"both companies seem to be satisfied when HPET is enabled in the BIOS and irreverent when HPET is forced in the OS"God save us from irreverent chip manufacturers.
mildewman - Wednesday, April 25, 2018 - link
A really interesting article. I disabled HPET at the motherboard level a few months ago in pursuit of lower USB latency, and noticed it also made game framerates slightly smoother. (i5 3470, Z77 chipset, Win10 x64)

unbellum - Wednesday, April 25, 2018 - link
I would love to see your results and hear your thoughts with regard to the compilation benchmark. As a developer fighting long build times, this is extremely relevant to my current work.

mapesdhs - Sunday, May 6, 2018 - link
Have you monitored your CPU usage and I/O behaviour? Just wondering if an NVMe SSD would help, assuming you don't already have one.agilesmile - Wednesday, April 25, 2018 - link
I can't understand why you portray HPET as a magical highest-precision timer. TSC is faster and more accurate when it has a proper implementation (modern CPUs).

It would be really useful to test how overclocking modern CPUs affects TSC, and maybe report bugs to the CPU manufacturers if it still does.
FYI here's more about TSC and HPET: https://aufather.wordpress.com/2010/09/08/high-per...
Senti - Wednesday, April 25, 2018 - link
TSC isn't only the highest resolution timer – it's also the cheapest one in terms of latency. It has only 2 major problems:
1) On some really old CPUs it's tied to actual CPU clock and changes according to frequency change.
2) It's tied to system base clock and changes with it.
But since base clock overclocking is dead you can pretty much consider TSC as a stable timer now.
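The latency point is easy to probe from user space: time how long a single timer read takes. The sketch below measures Python's `perf_counter` (which wraps QueryPerformanceCounter on Windows), so the absolute numbers include interpreter overhead; the interesting comparison is running it with TSC-backed vs forced-HPET timing.

```python
import time

def timer_call_cost_ns(iterations=200_000):
    # Average cost of one timer read, in nanoseconds. When the OS
    # high-resolution timer is backed by the TSC this is cheap; when
    # HPET is forced, each read is far more expensive.
    start = time.perf_counter()
    for _ in range(iterations):
        time.perf_counter()
    total = time.perf_counter() - start
    return total / iterations * 1e9

print(f"{timer_call_cost_ns():.0f} ns per call")
```

A game or benchmark that reads the timer thousands of times per second pays this cost on every read, which is one plausible mechanism for the real (not merely reported) performance losses discussed in the article.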
Billy Tallis - Wednesday, April 25, 2018 - link
There's also the inconvenience that the TSC is a per-core timer, and it's hard to get the TSCs exactly synchronized between cores, so software that needs really high resolution timing also needs to worry about thread pinning.

BillyONeal - Wednesday, April 25, 2018 - link
Not to mention cross-socket and power management impacts!

Billy Tallis - Thursday, April 26, 2018 - link
The power management effects were fixed way back with Nehalem. With even desktop CPUs doing clock speed changes all the time (eg. Turbo Boost), TSC would be useless if it didn't account for any of that. Nowadays, the TSC is only vulnerable to distortion from unusual sources of clock speed changes, like BCLK overclocking or drift in the clock generator.

Senti - Wednesday, April 25, 2018 - link
Plenty of words and nothing about another solution to the timing problems: drop always-in-beta Win10 and test on stable Win7.

You write that you care about 'gamers' and 'default configuration' and ignore that the Win7 share is almost 2x the Win10 one (according to Steam). In enterprise there is even less love for Win10.
BillyONeal - Wednesday, April 25, 2018 - link
That's awfully hard given that Win7 isn't supported on Ryzen.

SkyBill40 - Friday, April 27, 2018 - link
^*DING, DING!*
mapesdhs - Sunday, May 6, 2018 - link
Plenty of mbd vendors support Win7 with Ryzen, whatever the official support is supposed to be. Most mbd vendors are not so dumb as to lock out the largest share of the market.bbertram - Wednesday, April 25, 2018 - link
Well this is interesting! This could have serious implications.

Googled HPET really quick and found this: https://www.reddit.com/r/Planetside/comments/416ns...
and then I found this link from that thread....a little ironic.
https://forums.anandtech.com/threads/do-you-have-h...
bbertram - Wednesday, April 25, 2018 - link
An interesting article that talks more about the issue. They even look to have a benchmark to show the impact. The video is also very interesting. The more I research this problem, the more I see it's been known for a very long time now. https://tinyurl.com/yd8qsh7w
bbertram - Wednesday, April 25, 2018 - link
ohhh...more nice info: https://tinyurl.com/yd39zw8c

_mat - Wednesday, April 25, 2018 - link
Very thorough article. I'd like to point out a few things though, that may add some information to this.

AMD and especially Intel have swept this problem under the rug since the launch of Skylake-X. I noticed this problem while benching for a review and initially thought that my OS installation was the cause. After some testing I finally found the same root of evil as Ian did. At that time I made a video and called it the "Intel X299 HPET" bug (can't post a link, it was already mentioned in the comments here).
I tried to talk to PR and engineers at Intel for quite a while and they heard about my bug report but refused to comment. Time went by and Threadripper and Coffee Lake were born, both inheriting the same slow HPET QPC timer calls. I informed Intel repeatedly, still no comment.
During that time I wrote the following benchmark that sheds some light on the whole QPC and timer business on Windows. It shows your Windows timer configuration, gives recommendations for precision and performance, provides a way to bench your QPC timer in a synthetic and a game test and gives easy access to change TSC to HPET and vice versa.
As I am not able to post a link here, please search for "TimerBench", you will be able to download it.
I am also the author of GPUPI, one of those benchmarks for overclockers mentioned in the article that enforced HPET timers for secure timing a while back. Since discovering the HPET bug I have pulled back on this restriction. Since Skylake, HPET is no longer necessary to avoid BCLK skewing; iTSC is just fine. AMD is still affected though, possibly Ryzen 2 as well (Threadripper and Ryzen 1 were).
bbertram - Wednesday, April 25, 2018 - link
Link to download: https://tinyurl.com/y7w6tg36
Link to article: https://tinyurl.com/yd8qsh7w
Arbie - Thursday, April 26, 2018 - link
Wow! Google translator is amazing when going from German to English!

mapesdhs - Sunday, May 6, 2018 - link
Might be because English has its roots in Germanic languages. :D Old English sounds a lot like common words in Dutch, and there's a region in Germany where the way German is spoken can sound to other Germans to be rather like English (according to a German guy I know). It's all those pesky Saxons, Angles, etc. :D

TrackSmart - Wednesday, April 25, 2018 - link
Thank you _mat! Hopefully your comment gets attention here at AnandTech, and in turn, this article and your work get some attention from Intel. On the AMD side, it sounds like enabling HPET has only a small penalty in most cases, but those differences on the Intel side are very troubling. At the very least we should be forewarned!

Dark_wizzie - Wednesday, April 25, 2018 - link
What software causes HPET to be forced on in Windows? I have multiple such programs installed but it still appears off.

Dec666 - Wednesday, April 25, 2018 - link
Hi, AT.

First of all, I wanted to thank you for the extreme effort you put into your reviews and analysis.
My thinking on the subject is that if you disable HPET in the OS, this may make your numbers and review conclusions irrelevant to real-world scenarios. As you have said, many programs (like video streaming, monitoring/overclocking, and potentially motherboard software (not to mention Ryzen Master)) require HPET to be enabled in the OS, and they will force it during the installation process and most likely won't inform you about this. That means that if you've installed all the software you're going to use on a fresh OS (and/or fresh PC), it is very possible that some of that software will have forced HPET and you won't know about it.
To my mind, most people who read CPU reviews are enthusiasts and/or those who want to make a decision on a CPU purchase by themselves. The majority of people will just buy a PC based on others' opinions or a consultant's advice. So those for whom a 10% difference in performance matters, and/or those who bought an expensive GPU like a 1080/1080 Ti, will probably use monitoring software like HWiNFO or Afterburner. That means that HPET will be forced on their systems, and that they will have real-world numbers close to what you got in the original Ryzen 2000 review.
Another thing is that by disabling HPET in OS, while doing tests for a new review, you will hide the problem with it on Intel systems. People will not consider this as a potential performance hit or disadvantage of Intel platform in general.
Moreover, I suspect that in the future more programs and, probably, next-gen games will require HPET (in order to better synchronize even more threads). Since most people keep a CPU for more than one year, they will potentially have a worse experience with Intel CPUs in the future, compared to AMD CPUs.
So it looks more logical to me to test CPUs with HPET forced (for all software), but have additional tests with HPET disabled for just games in order to have games tested with HPET both on and off. That will emphasize the problem. For me this is the same reason why it is important to test hardware with all Smeltdown patches and BIOS updates installed.
Thanks.
Alistair - Wednesday, April 25, 2018 - link
Nothing I use in Windows, including MSI Afterburner, forces HPET on. I think your conclusion follows from that wrong assumption.

Dec666 - Wednesday, April 25, 2018 - link
If it doesn't use HPET, then how can it be precise? My conclusion doesn't follow any assumption. AT said that any software can force HPET. Maybe Afterburner doesn't do that right now. But there is a good chance that it is so with other software right now, or will be with Afterburner in the future.
... because TSC in modern CPUs is invariant and accurate. HPET is not needed unless you're doing something out of the ordinary.

IKeelU - Wednesday, April 25, 2018 - link
"However, it sadly appears that reality diverges from theory – sometimes extensively so."

Please don't do this. There is no theory that states that HPET won't affect benchmarks, but rather an expectation that it will not. That is a different thing. I understand it's a common colloquialism to use "theory" in this way, but I also expect AnandTech to exceed common standards.
mapesdhs - Sunday, May 6, 2018 - link
The misuse of that word, and many others, is common in mainstream media. It probably starts in state education.akamateau - Wednesday, April 25, 2018 - link
How does HPET affect DX11 or DX12 bench results? I find it somewhat odd that an obscure BIOS or OS switch can be so controversial.

I would suggest some research regarding HPET settings vis-a-vis DX12, DX11, and even Vulkan. Since Windows 7 and 8 are still in widespread use, how hardware responds to the two major APIs is relevant.
However, there is also one system stress benchmark that, while in widespread use in European online media, is almost completely ignored in the US, with the occasional convenient exception of Tom's Hardware. That benchmark is the Chessbase Fritzmark.
Walkeer - Wednesday, April 25, 2018 - link
Best job, AnandTech! This is why I read your articles and regard you highest on the net: what you are doing is science. Many, many thanks for that! Important question: will you be using the new AGESA 1.0.0.2a BIOS for Ryzen 2xxx? It shows a positive performance impact: https://www.phoronix.com/scan.php?page=article&...

techguymaxc - Wednesday, April 25, 2018 - link
Kudos for identifying and correcting the fault in your test methodology, and especially for publishing your findings. I was about to write AT off with those sketchy gaming results, but I can see there is no need; you have redeemed yourselves! Anand would be proud ;)

ACE76 - Wednesday, April 25, 2018 - link
Am I the only one that thinks Intel may have tried pulling a fast one here? Can the performance boost be validated on the Intel side?

mahoney87 - Wednesday, April 25, 2018 - link
A fast one? Nobody turned on HPET during benchmarking for at least a decade. They started doing it because AMD said so, because there were some discrepancies going on when posting results on HWbench.

tiwake - Wednesday, April 25, 2018 - link
Phoronix reports that there is an AGESA 1.0.0.2a update for the ASUS X470 motherboard he has that brings another 6% performance increase with seemingly everything on Linux and the Ryzen 7 2700X.

Alexvrb - Wednesday, April 25, 2018 - link
"however the most gains were limited to specific titles at the smaller resolutions, which would be important for any user relying on fast frame rates at lower resolutions."

Uhh, isn't that negated by more stuttering without HPET? Or does having it "available" provide the same real-world smooth gameplay as having it forced on, but somehow also boost benchmarks?
Ryan Smith - Thursday, April 26, 2018 - link
You shouldn't be seeing stuttering in normal scenarios. If anything, it's forcing the use of HPET that could lead to stuttering, since it's a relatively expensive system call to make.
So... now this has me thinking... which results are accurate? Are the new findings used by Intel to show an artificial boost in benchmarks? I just can't grasp this much of a performance difference just from HPET being forced on... it seems that just the reporting is skewed... which sounds very pro-Intel!
As amazing as it looks, the new results are accurate. Forcing the use of HPET really does have a sizable performance impact in some of these games. Particularly, I suspect, any game that likes to call on OS for timers a lot.TheNerd389 - Thursday, April 26, 2018 - link
While I know that running your tests without HPET forced is most representative for most people, would it be unreasonable to ask that the results with HPET forced be presented as well going forward?

For instance, I use the HPET timer for collecting performance data for software that I write. If enabling HPET can cause a 10-30% drop in performance, it makes a huge difference to me. That's enough of a difference to throw off the measurements of parallel fine-grained operations by a very substantial margin. In my case, that would result in improperly tuned code.
Based on your results with Ryzen 2, there is a much more significant difference between the 2700X and the 8700K than most reviews suggest for my application. That's an important insight from my perspective. If the pattern holds for the HEDT chips or, *shudder*, Epyc and Xeon, there is a lot to lose by not considering the effects of HPET. In those spaces, it could mean missing out on several thousand dollars worth of performance per CPU by choosing the wrong architecture.
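To put numbers on that concern (the per-read costs below are illustrative assumptions, not measurements): when each sample brackets a short operation with two timer reads, the read cost is charged to the operation, so an expensive HPET-backed read distorts fine-grained measurements far more than a cheap TSC-backed read.

```python
def apparent_duration_us(true_us, timer_read_cost_us):
    # One timer read before and one after the operation; their combined
    # cost gets attributed to the measured interval.
    return true_us + 2 * timer_read_cost_us

true_us = 10.0                                # a fine-grained 10 us operation
tsc = apparent_duration_us(true_us, 0.02)     # assume ~20 ns per TSC-backed read
hpet = apparent_duration_us(true_us, 1.5)     # assume ~1.5 us per HPET-backed read

print(f"TSC-backed:  {tsc:.2f} us ({(tsc - true_us) / true_us:.1%} error)")
print(f"HPET-backed: {hpet:.2f} us ({(hpet - true_us) / true_us:.1%} error)")
```

With these assumed costs the TSC-backed measurement is off by under 1%, while the HPET-backed one overstates the duration by about 30%; for longer operations the fixed read cost amortizes away, which is why coarse benchmarks are less affected.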
Billy Tallis - Thursday, April 26, 2018 - link
This issue shouldn't matter at all in the server space, because one of the only reasons to force the HPET to be used as the primary timer is to get accurate timing when overclocking (or get results at stock speed that can be fairly compared against accurate overclocked results). Servers don't get overclocked, so they can rely on the TSC for most of their timing needs and not have to incur the HPET overhead on every time check. (The HPET will still get used for some things, but it doesn't have to be the only time source when the TSC is trustworthy.)bbertram - Thursday, April 26, 2018 - link
The problem is not just for overclocked CPUs. Also, what if you don't know whether HPET is being forced? Who knows to check for that? What software can force it on?

TheNerd389 - Thursday, April 26, 2018 - link
Have you considered build farms and/or testing farms that gather performance data? Those are what I'm referring to here.

bairlangga - Thursday, April 26, 2018 - link
Are there really no adverse effects from defaulting to not forcing HPET? Imho, calculation is not only about accuracy but also about timeliness. In measurement, benchmarking, or maybe in controlling, it would matter a lot. On the other hand, in gaming, I don't know if this is a correct understanding, but it could cause untimely frames, or parts of them, ghosting, artefacts, etc.

Found an interesting article:
http://hexus.net/tech/news/cpu/103531-amd-tech-gur...
lenghui - Thursday, April 26, 2018 - link
Sounds like it's time to dig out that good old stopwatch from storage.

haplo602 - Thursday, April 26, 2018 - link
And when MS changes the HPET default to forced if detected... then you are screwed again and have to retest...

You are at a dead-end, actually. You are switching to HPET off (effectively) because it highly favors Intel in a few benchmarks, yet AMD is mostly unaffected. Will you change that if the tables turn in the future again?
Come on ... it is more about consistency. Your HPET forced mode definitely highlighted an issue with Intel chips yet instead of hitting on Intel about the issue you are changing your settings ...
Maxiking - Thursday, April 26, 2018 - link
"when MS changes HPET default to forced if detected"
"if games start utilizing more cores"
It is a though job being an AMD fan these days, just "ifs" and "whens" all the time.
Maxiking - Thursday, April 26, 2018 - link
***tough***. damn you, autocorrect

eva02langley - Thursday, April 26, 2018 - link
He's entirely right. HPET is an issue on Intel's shoulders as of now. How can we be sure that, without HPET on, Intel benchmarks are accurate?

Also, you cannot turn off something in the BIOS that is supposed to be on, as mentioned by Intel, just because you want to give the crown to one manufacturer or the other. By the way, we are talking about 1080p benchmarks with a GTX 1080. 60 Hz is irrelevant since an RX 580 can render it; that leaves only 1080p @ 144Hz.
Also, what about new games since these results seem to be linked to old games?
You don't get 40% more performance by switching something on and off in the BIOS. If you do, then something needs to be fixed.
RafaelHerschel - Thursday, April 26, 2018 - link
Well, I once disabled my boot drive in the BIOS and experienced a 100% slowdown. I'm tempted to agree with you and feel that something needs to be fixed so that my system works without a boot drive, but other people don't seem to agree. Opinions...

Then there is the time when I disabled my NVIDIA GPU in the BIOS of my laptop. Massive performance drop in games... Not good. Sad. Needs fixing.
jor5 - Thursday, April 26, 2018 - link
tl;dr - "We completely screwed up our review and made a show of ourselves - but we're not apologising"

AndrewJacksonZA - Thursday, April 26, 2018 - link
You don't analyze data much in your day job, do you?SkyBill40 - Friday, April 27, 2018 - link
What would be more accurate than your statement is something like: "We found that our previous data contained some pretty significant inaccuracies, and to be thorough, we're re-testing, improving our testing methodologies as a whole, and explaining as much at length for the sake of transparency. Thanks for being patient with us."
OrphanageExplosion - Thursday, April 26, 2018 - link
What I don't understand is how the original gaming data ended up being published at all.

The faulty results that were published were entirely at odds with the data supplied by AMD itself (which we've all seen - even prior to the reviews dropping, if you've been following the leaks). Surely if Ryzen 2000 were so much faster than Coffee Lake, they would have been shouting it from the rooftops - gaming is, after all, one of the few weaknesses Ryzen has. AMD's no-doubt massaged results were a ton more accurate than AnandTech's - madness.
Not only that but the Anand results showed a massive increase over Ryzen 1000 - which simply isn't feasible for what is effectively a mild refresh. Meanwhile, the results also showed Ryzen 5 handily beating an 8700K... surely you must have realised that something wasn't right at that point? Utterly baffling and calls into question your approach generally.
This is a major hit to credibility. I mean, if you're going to publish a CPU 'deep dive', surely you need to actually analyse the data and be ready to question your results rather than just hitting the publish button?
RafaelHerschel - Thursday, April 26, 2018 - link
It's extremely disappointing. The original benchmarks should not have been published. But since that happened, they should have been removed or at the very least, there should have been a far more obvious (and stronger) disclaimer.Even their mea culpa isn't very strong. The article is well written, but should have started with: "We made a mistake". I dislike the misleading statement that other publications did not all install the latest patches. Some publications did, others gave a good reason why they didn't.
Mistakes happen, but after days of showing incorrect benchmarks people started speculating and even now there are still people spreading misinformation based on the AnandTech article.
This is an editorial problem. I feel very ambivalent about this. AnandTech supplied some really interesting information, but if they can't redact wrong information in a timely fashion, then AnandTech is not trustworthy.
I have reached my personal conclusion, but it is with regret. A few extra lines in the original version of the review would have made all the difference.
Ryan Smith - Thursday, April 26, 2018 - link
"This is an editorial problem. I feel very ambivalent about at this. AnandTech supplied some really interesting information, but if they can't redact wrong information in timely fashion, then AnandTech is not trustworthy."Ultimately that sits with me. Ian was able to repeat the earlier numbers again and again and again. Which indicated, at least initially, that it wasn't a mere fluke with our testing. The numbers we were getting were correct for the scenario we were testing. It just wasn't the scenario we thought we were testing.
It wasn't until he found the HPET connection a couple of days later that we even began to understand what was going on. And a couple of days later still until we had enough data to confirm our hypothesis.
As far as the original review goes, while it's been updated a few times since then, once we decided to audit our results, we did post a note on several pages about it. Which I hope was plainly visible. But if not, then that is very good feedback to have.
eva02langley - Friday, April 27, 2018 - link
Your methodology was good, it is just that an important factor... that should not be a factor... is actually crippling the competition.
My question now is: should HPET be forced in today's systems? For example, for security or other stability issues?
I mean, by playing in the lab, you did discover a huge problem that Intel needs to address, IMHO.
Now it is coming back to the use of HPET and the requirements behind it.
Also, is this a problem with only older games? If this is the case, changing your bench suites should be a priority.
Tchamber - Thursday, April 26, 2018 - link
It's too bad you're so upset by this. Their first results, while not representative of peak performance, are indeed valid for the way they were achieved. They are not "wrong," so much as parallel. I hope they keep the results. As this review is the first one to have this issue, I'm glad to see they discovered the cause of the performance scores in such a short time. Keep up the good work, Ian!
AdditionalPylons - Thursday, April 26, 2018 - link
Fascinating!
MDD1963 - Thursday, April 26, 2018 - link
All that's really missing is the admission that tinkering with the HPET settings crippled Intel performance, and gave AMD the lead which they would not have without having done so...; correct? :) (Nice attempted deflection with the 'someone at Intel told us it would not matter' spiel....!) :)
BikeDude - Friday, April 27, 2018 - link
As a current Intel user, I'm more worried about what software is using this timer and how often I have been exposed to this sort of non-stellar performance.
My next upgrade will take this into consideration. Best of luck to you all.
_mat - Thursday, April 26, 2018 - link
Seems like the HPET bug is finally getting the right momentum. Great!
I finally wrote an English article about the HPET bug. There are a lot of misconceptions going around right now about what the bug is and what it's not. The article also explains what the HPET timer problem once was, why it matters again today, and which platforms are affected. Sadly, it also shows the way Intel handles bug reports like this.
https://tinyurl.com/ybz8qygj
Be sure to try TimerBench as well, my Windows timer benchmark! It has already been posted here so I won't bother you again (and the download is in the article).
Maxiking - Thursday, April 26, 2018 - link
Good job; this needs more attention so this Meltdown/Spectre bullshit dies ASAP.
The video was published on 23 Jul 2017.
AndrewJacksonZA - Thursday, April 26, 2018 - link
You raised an important point, IMHO: do game engines use HPET information in their logs, their AI calculations, their gfx calculations, and whatever else they do inside?
Why this is important, in my opinion, is because of what the users' experience will be out of the box. They build their PC, they install Windows, they install the game, and then... do they disable or enable HPET before they play? No, they run the darn game!
We trust you to give us review results that would typically represent what we will get. Non-technical users also trust you to give them review results that would typically represent what they will get, no fiddling around, because they don't know how and aren't interested in doing so. Please take this comment into account when deciding if you're going to be flipping HPET switches with every game on both CPU brands.
______________________________
And hey, I didn't see it, but did you do any comparisons on whether the GPU maker makes a difference to the HPET impact for each CPU maker?
Nvidia GPU + Intel CPU
Nvidia GPU + AMD CPU
AMD GPU + Intel CPU
AMD GPU + AMD CPU
AMD + AMD APU
AMD + Intel APU
Intel + Intel APU
bbertram - Thursday, April 26, 2018 - link
I think you will see a lot of websites testing these combinations and re-validating their results. How do we trust any benchmarks now? Going to be some fun reading in the coming weeks.
Ryan Smith - Thursday, April 26, 2018 - link
"Please take this comment into account when deciding if you're going to be flipping HEPT switches with every game on both CPU brands."Thankfully, we have no need to flip any switches for HPET. The new testing protocol is that we're sticking with the default OS settings. Which means HPET is available to the OS, but the system isn't forced to use it over all other timers.
"And hey, I didn't see it, but did you do any comparisons on if GPU maker makes a difference to the HEPT impact on CPU maker?" We've done a couple of tests internally. Right now it doesn't look like it makes a difference. Not that we'd expect it to. The impact of HPET is to the CPU.
HeyYou,It'sMe - Thursday, April 26, 2018 - link
Even before the patches, using the HPET timer causes severe system overhead. This is a known issue that is exacerbated slightly by the patches, but there isn't a massive increase in overhead. AnandTech should post HPET overhead before and after the patches. You will find the impact is much the same.
eva02langley - Thursday, April 26, 2018 - link
Also, HPET seems to have a higher impact on older games. Maybe it was the way older engines were developed.
Also, are we sure HPET is not just messing with the FPS data, since the timing could be off?
peevee - Thursday, April 26, 2018 - link
Great illustration of the phrase that "it's better not to know than to know something which isn't so".
A standard 1 kHz RTC is good enough for all real performance measurement where the measured tasks run for at least a second or two (otherwise such performance just does not matter in the PC context). Multiple measurements, plus elimination of false precision from averaging the results, would eliminate all errors significant for the task.
When you have to change the default system configuration to run the tests, the tests reflect non-default configurations that nobody is running at home or at work, and are as such simply irrelevant.
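peevee's suggestion above, repeating the measurement and reporting no more precision than the timing actually supports, can be sketched roughly like this (an editor's illustration in Python, not from the thread; the lambda stands in for any benchmark workload):

```python
import statistics
import time

def benchmark(task, runs=5):
    """Time `task` several times; return (best, median) wall times in seconds.
    Reporting best/median of multiple runs avoids the false precision of
    averaging noisy single runs out to meaningless decimal places."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        samples.append(time.perf_counter() - t0)
    return min(samples), statistics.median(samples)

best, median = benchmark(lambda: sum(range(500_000)))
# Round to a precision the timer actually supports before reporting.
print(f"best {best:.3f} s, median {median:.3f} s")
```

The exact workload and run count are placeholders; the point is only that repetition plus honest rounding absorbs timer error for any task running long enough to matter.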
pogostick - Thursday, April 26, 2018 - link
I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down? How? A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock. Is the only way to guarantee that you are using it to force it on for the whole system? What do these differences look like in other OSes? There are way too many questions unanswered here.
Is it not more likely that using non-HPET timers allows the platform to essentially create its own definition for what constitutes "1 second"? Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?
These systems need to be tested with a common clock. Whether that is some specialized pcie device, or a network clock, or a new motherboard standard that offers special pins to an external clock source, or whatever, is to be determined elsewhere. All boards need to be using the same clock for testing.
Ryan Smith - Thursday, April 26, 2018 - link
"I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down?"
It's not that it's available that's the problem. The issue is that the OS is forced to use it for all timer calls.
"How?"
Relative to the other timers, such as QPC, HPET is a very, very expensive timer to check. It requires going to the OS kernel and the kernel in turn going to the chipset, which is quite slow and time-consuming compared to any other timer check.
"A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock."
Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently.
"Is the only way to guarantee that you are using it to force it on for the whole system?"
As a user, generally speaking: yes. Otherwise a program will use the timer the developer has programmed it to use.
"Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?"
No. Modern invariant timers are very good about keeping accurate time, and are very cheap to access.
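The per-call cost Ryan describes can be estimated with a rough, platform-agnostic sketch (editor's illustration, not from the thread): poll a clock in a tight loop, the way a game's physics step polls for elapsed time, and compute the average overhead per call. The absolute numbers depend entirely on the machine and on which hardware timer the OS has chosen to back the call.

```python
import time

def timer_call_overhead_ns(clock, calls=200_000):
    """Estimate the average cost, in nanoseconds, of one call to `clock`
    by polling it in a tight loop (as a physics step checking elapsed
    time each iteration would)."""
    start = time.perf_counter_ns()
    for _ in range(calls):
        clock()
    return (time.perf_counter_ns() - start) / calls

# On most systems time.monotonic is backed by a cheap invariant source;
# a kernel-mediated clock (such as forced HPET) costs far more per call.
print(f"monotonic: ~{timer_call_overhead_ns(time.monotonic):.0f} ns/call")
```

Run under default settings and again with HPET forced on, a loop like this makes the difference in per-call cost directly visible.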
pogostick - Friday, April 27, 2018 - link
Thank you.
risa2000 - Monday, April 30, 2018 - link
"Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently."Would it be too difficult to set up a profiler session and count how many times is HPET called and eventually even from where?
To produce such an impact it must be a load of calls and I still cannot imagine why so many.
Next, why Intel suffers so much by forced HPET compared to new AMD?
Kaihekoa - Friday, April 27, 2018 - link
Excuses excuses.
Timur Born - Friday, April 27, 2018 - link
It's noteworthy that HPET use at default Windows settings is a black box, aka Windows decides whether to use it or not unless software makes an effort to get more control over it. Windows' decision to use one timer or the other depends on the hardware platform, the Windows revision (even small updates), and even the mixture of software you are currently running.
This also means that the HPET "bug" reported here and on Overclocker.at can hit everyone without them even knowing when and why. Some people prefer to disable the HPET completely, albeit I am not a fan of this "solution". Instead I would very much expect Intel to get a hold of the situation and fix the issues their hardware is experiencing when HPET is used.
Again, the default Windows behavior is not to avoid HPET entirely; it still makes occasional use of it where applicable. And seldom/applicable-use issues are still issues.
eva02langley - Friday, April 27, 2018 - link
You guys should really analyze the FPS with specific tools, like a slow-motion camera, for calculating the refresh rate. Something is fishy here...
Also, how in hell is AMD not affected? Is it because Intel is reaching too high an FPS in some games at 1080p?
Is this a software or hardware issue? Is it the same on Linux... if Linux has HPET?
Jacobb20970 - Friday, April 27, 2018 - link
Going to see if, perhaps, HPET timings are even more granular on "server"-class systems. My Intel dual 2011 board (Sandy/Ivy Bridge) with dual Xeons has... exceptionally poor performance in certain applications relative to findings over at NATEX with a multitude of identical systems. I may have enabled HPET by accident with a monitoring application at some point.
Although it's not likely a root cause of my performance issues, anything I can scrape out of the system would be nice.
Ninjawithagun - Friday, April 27, 2018 - link
I choose not to believe AnandTech's convenient flaw with regard to Intel's default vs. forced HPET performance. I will wait for several other hardware reviewers to confirm or debunk these results before I would tell anyone to make a decision. Something smells fishy about this whole thing. Why is the Intel HPET all of a sudden an issue? Or a better question: is AnandTech now in Intel's pocket?
eddman - Saturday, April 28, 2018 - link
"Why is the Intel HPET now all the sudden an issue?"... because other websites do not force HPET on with intel CPUs during benchmarks, meaning they never encountered this issue. Anandtech was forcing HPET to be used which is not the default state and caused problems.
LurkingSince97 - Friday, April 27, 2018 - link
I think it would be useful / important in future CPU reviews to include a couple tests that measure the HPET performance impact when forced on. People will want to know, and it provides an interesting side-story for new CPUs or updated platforms / OS.Also, I think it provides a public service, since if ordinary users run into this (perhaps by some third party software install forcing it on) they might go crazy trying to understand why their gaming performance tanked. A (small) page on this topic in each new CPU review will remind people that this is an important thing to consider if they are debugging issues on their own system!
lyeoh - Saturday, April 28, 2018 - link
After so many decades, and with so many transistors available, can't Intel/AMD add better, more efficient, and more accurate ways of getting time (e.g. monotonic time and also "real time")? They keep adding YetAnotherSIMD, but how about stuff like this? For bonus points, add efficient, easy ways to set (and cancel) interrupts that will trigger after certain times.
patrickjp93 - Monday, April 30, 2018 - link
It's a total misconception that any circuit can get these times more accurately and efficiently at the same time. There's no demand for it on a broad scale either.
Bensam123 - Saturday, April 28, 2018 - link
This has been something the gaming community has known about and discussed for years; you can find posts about HPET all over popular forums. However, they aren't backed by any sort of meaningful data, and much like with core parking and VPNs for gaming, legitimate hardware testing websites have generally either turned up their noses at it or disregarded it as complete snake oil.
I'm glad HPET is finally getting looked at in real depth. What was discussed here was just its real effect on quantifiable metrics, such as benchmarks, but what gamers discuss is its impact on stuttering or microstutters, as well as hit registration in netcode (which is extremely dependent on timing). That wasn't looked at at all here. While this discussion was mainly focused on the difference in results between other websites and Intel vs AMD, I think that's not quite the right way to approach this. Rather, it should be looked at as the effect of timers in general on gaming as a whole.
The statement you guys received from Intel pretty much makes it blatantly clear that no one over there really had any idea what was going on with the timers, or that they had a big impact on anything outside of synthetic results. Microsoft has just put band-aids on top of band-aids to keep everything running, and it got to the point where it's no longer transparent to people who are buying hardware, people who are making hardware, or people who are developing software (beyond a few very niche groups) how they all interlock and intermingle with each other. I didn't know either, until I did some digging, and it took even more to learn, from multiple and sometimes vaguely related forum posts, that there are more timers besides HPET.
A higher resolution timer should be good, especially for video games, but the impact it has on the system, because of its crude and backwards implementation, has made it such that it's basically just a synthetic cog that can't be used in practice. It makes you wish there was a solution that just put everything on the same page, hardware and software. I'm sure game developers who just license an engine and then essentially make a mod have no idea what's going on here, and developers who actually make the engines (DICE, Epic, Crytek) may not even know there is a problem in the first place.
I do hope Anand takes this a bit further with frame time benchmarks and maybe FCAT designed to look specifically at this. As was mentioned in this article, implementations even seem different across different motherboards, which is a very, very bad thing and should also be looked at. There is a lot of room for future articles here focused around this specific issue until there is some remote amount of standardization.
If you're looking for more interesting things to test: almost no one tests netcode in video games, with the handful of people who do making arbitrary comparisons and really having no tools or benchmarks to work with, even though video games (especially highly competitive ones) are extremely dependent on such things, especially when you get into the top 10% of the player base. People just assume gaming code 'works' and that that magical part of games is created equal everywhere, when it couldn't be further from the truth. Netcode is literally trying to hold together what is essentially a train wreck while trying to mask it from its users as best as possible. Some games do it a lot better than others.
samal90 - Sunday, April 29, 2018 - link
I don't understand. The review should be based not on how AMD or Intel wants you to set up your PC, but on how it ships by default. Most users don't fiddle with the BIOS and don't understand HPET.
The review should be done with default settings. You install the CPU, install Windows and bam... review. Fiddling with options might be good as a side project to test stuff, but in most cases, people use it as is.
patrickjp93 - Monday, April 30, 2018 - link
Anandtech also caters to enthusiasts, so there's that to consider.
tucode - Monday, April 30, 2018 - link
Yes, but default (out of box) settings are more useful and informative for basic hardware comparison benchmarks.
Then one can add extra 'tweaked' benchmark data the same way as OC results.
Also, it would be nice to share/provide what tweaks have been used (e.g. a regedit file, etc.)
as for a lot of extra work arguments/comments, sure, but that's what makes the difference, and people who are interested (=enthusiasts) will happily look into it...and appreciate the extra effort
whetstone - Wednesday, May 2, 2018 - link
I have not read 23 pages of comments, but...
Concerning the RTC, you should take a look at GetSystemTimeAdjustment/SetSystemTimeAdjustment (Windows) and hwclock (Linux).
x0fff8 - Friday, May 4, 2018 - link
Just curious, have the gaming benchmark results in the Ryzen deep dive post been updated yet?
So were there more corrected/confirmed results coming, or just this vague rehash of a 'we blame the chipset and Windows timing for our crappy results' fiasco as a summary? :)
Gamma9 - Saturday, May 12, 2018 - link
Yeah, that's what I thought it would be after watching AdoredTV's video on this.
Gamma9 - Saturday, May 12, 2018 - link
Ah no, no... I didn't think it was this. I thought it would just be that the timers would be running slower on the AMD hardware.
The HPET, despite its name, is not more accurate. The TSC timer is accurate to CPU clock-cycle precision, which is usually more than two orders of magnitude better than the HPET.
x58haze - Wednesday, August 15, 2018 - link
I wanted to know how to properly disable the Meltdown patches on a Ryzen CPU.
Tried via Windows, but I cannot disable the Meltdown one, just the Spectre one.
Tried already:
FeatureSettingsOverride 3
FeatureSettingsOverrideMask 3
And this is what Inspectre says when open:
Photo uploaded with the Lightshot tool
http://prntscr.com/kj4f47
And this is the report via get-processpeculationcontrolsettings in PowerShell
https://prnt.sc/kj4fdj
I don't have a server; I'm just a regular guy who wants to get the most performance from my CPU without any patches applied via Windows. Is there a way to disable the CPU Meltdown mitigation?
Also tried disabling the Set-ProcessMitigation system setting, and the CPU Meltdown mitigation is still not disabled.. x.x