OoOE

You’re going to come across the phrase out-of-order execution (OoOE) a lot here, so let’s go through a quick refresher on what that is and why it matters.

At a high level, the role of a CPU is to read instructions from whatever program it’s running, determine what they’re telling the machine to do, execute them and write the result back out to memory.

The program counter within a CPU points to the address in memory of the next instruction to be executed. The CPU’s fetch logic grabs instructions in order. Those instructions are decoded into an internally understood format (a single architectural instruction sometimes decodes into multiple smaller instructions). Once decoded, any necessary operands are fetched from memory (if they’re not already in local registers) and the instruction, together with its operands, is issued for execution. The results are committed to memory (registers/cache/DRAM) and it’s on to the next one.
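To make that flow concrete, here’s a minimal sketch of the fetch/decode/execute/commit loop in C, for a made-up ISA where every instruction just adds two registers. It’s purely illustrative and bears no relation to how Atom actually fetches or decodes x86.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy ISA: every instruction adds two source registers into a destination.
     * Purely illustrative; real x86 fetch/decode is far more involved. */
    typedef struct { uint8_t dst, src1, src2; } insn_t;

    int main(void) {
        insn_t program[] = { {0, 1, 2}, {3, 0, 2}, {1, 3, 3} };
        int64_t regs[8] = { 0, 10, 20, 0, 0, 0, 0, 0 };
        unsigned pc = 0;                      /* program counter */

        while (pc < sizeof(program) / sizeof(program[0])) {
            insn_t in = program[pc];          /* fetch: next instruction, in order */
            int64_t a = regs[in.src1];        /* operand fetch (already in registers here) */
            int64_t b = regs[in.src2];
            int64_t result = a + b;           /* execute */
            regs[in.dst] = result;            /* commit the result */
            pc++;                             /* on to the next one */
        }
        printf("r1=%lld r3=%lld\n", (long long)regs[1], (long long)regs[3]);
        return 0;
    }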

In-order architectures complete this pipeline in order, from start to finish. The obvious problem is that many steps within the pipeline are dependent on having the right operands immediately available. For a number of reasons, this isn’t always possible. Operands could depend on other earlier instructions that may not have finished executing, or they might be located in main memory - hundreds of cycles away from the CPU. In these cases, a bubble is inserted into the processor’s pipeline and the machine’s overall efficiency drops as no work is being done until those operands are available.
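Here’s where those bubbles come from, in a toy in-order issue model (the three-instruction sequence and the 100-cycle load latency are invented purely for illustration). I3 doesn’t depend on the load at all, yet it still has to wait its turn behind the stalled I2:

    #include <stdio.h>

    #define NUM_REGS 8

    /* Each instruction names a destination, two sources and an invented latency. */
    typedef struct { int dst, src1, src2, latency; } insn_t;

    int main(void) {
        int ready_at[NUM_REGS] = {0};   /* cycle at which each register becomes available */

        insn_t prog[] = {
            {1, 0, 0, 100},   /* I1: load r1, pretend it misses and takes 100 cycles */
            {2, 1, 0,   1},   /* I2: r2 = r1 + r0, depends on the load               */
            {3, 4, 5,   1},   /* I3: r3 = r4 + r5, independent of I1/I2              */
        };

        int cycle = 0;
        for (unsigned i = 0; i < sizeof(prog) / sizeof(prog[0]); i++) {
            int a = ready_at[prog[i].src1];
            int b = ready_at[prog[i].src2];
            int ready = a > b ? a : b;
            if (ready > cycle)
                cycle = ready;                /* bubbles: nothing issues while we wait */
            ready_at[prog[i].dst] = cycle + prog[i].latency;
            printf("I%u issues at cycle %d\n", i + 1, cycle);
            cycle++;                          /* one issue slot per cycle */
        }
        return 0;
    }

Run it and I2 issues at cycle 100, I3 at cycle 101: roughly a hundred cycles of bubbles for work the machine could have done immediately.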

Out-of-order architectures attempt to fix this problem by allowing independent instructions to execute ahead of others that are stalled waiting for data. In both cases instructions are fetched and retired in-order, but in an OoO architecture instructions can be executed out-of-order to improve overall utilization of execution resources.
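Continuing the toy model above (same invented program and latencies, still nothing like Silvermont’s real scheduling hardware), an out-of-order scheduler simply issues the oldest instruction whose operands are ready each cycle, so I3 no longer waits behind the load:

    #include <stdio.h>

    #define NUM_REGS  8
    #define NUM_INSNS 3

    typedef struct { int dst, src1, src2, latency; } insn_t;

    static insn_t prog[NUM_INSNS] = {
        {1, 0, 0, 100},   /* I1: load r1, pretend 100-cycle miss  */
        {2, 1, 0,   1},   /* I2: r2 = r1 + r0, waits on the load  */
        {3, 4, 5,   1},   /* I3: r3 = r4 + r5, independent        */
    };

    int main(void) {
        int ready_at[NUM_REGS] = {0};
        int issued[NUM_INSNS] = {0};
        int remaining = NUM_INSNS;

        for (int cycle = 0; remaining > 0; cycle++) {
            for (int i = 0; i < NUM_INSNS; i++) {
                if (issued[i]) continue;
                int a = ready_at[prog[i].src1];
                int b = ready_at[prog[i].src2];
                int ready = a > b ? a : b;
                if (ready <= cycle) {         /* operands available: issue it */
                    ready_at[prog[i].dst] = cycle + prog[i].latency;
                    issued[i] = 1;
                    remaining--;
                    printf("I%d issues at cycle %d\n", i + 1, cycle);
                    break;                    /* one issue slot per cycle in this toy */
                }
            }
        }
        return 0;
    }

Now I1 issues at cycle 0, I3 at cycle 1, and only I2 waits until cycle 100. In real hardware a reorder buffer (or an equivalent structure) holds the out-of-order results so they can still be retired in program order.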

The move to an OoO paradigm generally comes with penalties to die area and power consumption, which is one reason the earliest mobile CPU architectures were in-order designs. The ARM11, ARM’s Cortex A8, Intel’s original Atom (Bonnell) and Qualcomm’s Scorpion core were all in-order. As performance demands continued to go up, and as smaller, lower-power transistors arrived, all of the players here started introducing OoO variants of their architectures. Although often referred to as out-of-order designs, ARM’s Cortex A9 and Qualcomm’s Krait 200/300 are only mildly OoO compared to the Cortex A15. Intel’s Silvermont joins the ranks of the Cortex A15 as a fully out-of-order design by modern standards. The move to OoO alone should be good for around a 30% increase in single threaded performance vs. Bonnell.

Pipeline

Silvermont changes the Atom pipeline slightly. Bonnell featured a 16 stage in-order pipeline. One side effect of that design was that all operations, including those that didn’t access the cache (e.g. operations whose operands were already in registers), had to go through three data cache access stages even though nothing happened during those stages. In going out-of-order, Silvermont allows instructions to bypass those stages if they don’t need data from memory, effectively shortening the mispredict penalty from 13 stages down to 10. The integer pipeline depth now varies depending on the type of instruction, but you’re looking at a range of 14 - 17 stages.
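As a back-of-the-envelope illustration of what the shorter mispredict path is worth, the mispredict contribution to CPI is roughly mispredict rate × penalty. The branch mix below is entirely hypothetical (and the stage counts are treated as cycles of penalty); it’s a sketch of the arithmetic, not Intel’s data:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical workload: 20% of instructions are branches and 5% of
         * those branches are mispredicted. Illustrative numbers only. */
        double branch_fraction = 0.20;
        double mispredict_rate = 0.05;
        double base_cpi        = 1.0;   /* assumed CPI with perfect prediction */

        double mispredicts_per_insn = branch_fraction * mispredict_rate;
        double cpi_13 = base_cpi + mispredicts_per_insn * 13.0;   /* Bonnell-style penalty    */
        double cpi_10 = base_cpi + mispredicts_per_insn * 10.0;   /* Silvermont-style penalty */

        printf("CPI with 13-cycle penalty: %.3f\n", cpi_13);
        printf("CPI with 10-cycle penalty: %.3f\n", cpi_10);
        printf("IPC gain from the shorter penalty alone: %.1f%%\n",
               (cpi_13 / cpi_10 - 1.0) * 100.0);
        return 0;
    }

With these made-up numbers the shorter penalty alone is worth a couple of percent; better prediction accuracy shrinks the mispredict rate term in the same formula, which is where the combined gain discussed below comes from.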

Branch prediction, a staple of any progressive microprocessor architecture, improves tremendously with Silvermont. Silvermont takes the gshare branch predictor of Bonnell and significantly increases the size of all associated data structures. Silvermont also adds an indirect branch predictor. The combination of the larger predictors and the new indirect predictor should increase branch prediction accuracy.
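For reference, gshare XORs a global history of recent branch outcomes with bits of the branch address to index a table of 2-bit saturating counters; making the table and history larger reduces destructive aliasing between branches. Below is a minimal sketch in C. The table size and history length are arbitrary here, not Bonnell’s or Silvermont’s actual dimensions.

    #include <stdbool.h>
    #include <stdint.h>

    #define GSHARE_BITS 12
    #define TABLE_SIZE  (1u << GSHARE_BITS)

    static uint8_t  counters[TABLE_SIZE];   /* 2-bit counters: 0..3, >= 2 means predict taken */
    static uint32_t global_history;         /* last GSHARE_BITS branch outcomes               */

    static uint32_t gshare_index(uint32_t pc) {
        return (pc ^ global_history) & (TABLE_SIZE - 1);
    }

    bool gshare_predict(uint32_t pc) {
        return counters[gshare_index(pc)] >= 2;
    }

    void gshare_update(uint32_t pc, bool taken) {
        uint32_t idx = gshare_index(pc);
        if (taken  && counters[idx] < 3) counters[idx]++;   /* saturate at strongly taken     */
        if (!taken && counters[idx] > 0) counters[idx]--;   /* saturate at strongly not-taken */
        global_history = ((global_history << 1) | (taken ? 1u : 0u)) & (TABLE_SIZE - 1);
    }

The indirect branch predictor is a separate structure: rather than a single taken/not-taken bit, it has to predict a full target address, since indirect branches (virtual calls, switch statements, interpreter dispatch) can jump to many different destinations.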

Couple better branch prediction with a lower mispredict latency and you’re talking about another 5 - 10% increase in IPC over Bonnell.

Comments

  • extide - Wednesday, May 8, 2013 - link

    What does Tegra 4 do 1.9GHz in?
  • Wilco1 - Wednesday, May 8, 2013 - link

    Rumour is that it goes in the next ZTE phone out in a few months.
  • phoenix_rizzen - Tuesday, May 7, 2013 - link

    Note: Tegra 4i does *not* use Cortex-A15 CPUs, it uses Cortex-A9 CPUs! In fact, there's very little "Tegra 4" in the "Tegra 4i" other than the name.
  • lmcd - Monday, May 13, 2013 - link

    And the GPU is closer to the 4 than 3.

    And the process node. Oh yeah, that.
  • name99 - Monday, May 6, 2013 - link

    You're willfully missing the point (and I say that as someone who's not convinced it will be easy for Intel to get ahead).

    What is the value of high speed CPUs in a phone (or for that matter a tablet, or a desktop machine)? For most users it is NOT that it allows some long computation to take a shorter time; rather it's that it provides snappiness --- it allows something that would have taken 1/40th of a sec to take 1/60th of a sec, or that would have taken 1/3rd of a sec to take 1/4 of a sec.
    In this world, where snappiness is what matters, the ability to run your CPU at very high speeds for very short bursts of time (as long as this does not cost you long-run power) is an exceedingly valuable asset. You're being very stupid to dismiss it.
  • dig23 - Thursday, May 9, 2013 - link

    I think so too. This article sounds totally biased :(
  • bkiserx7 - Monday, May 6, 2013 - link

    I wish they would go all out and lay it all on the table. I think it would drive great competition through the industry.
  • Gigaplex - Tuesday, May 7, 2013 - link

    Agreed. And if it does come out "too good", just downclock it and get even better battery life.
  • jamesb2147 - Monday, May 6, 2013 - link

    This is, by far, the worst article I've ever read on Anandtech. I'm pulling you out of my RSS feed specifically because of this article.

    Post when you have specs, guys, not Intel slides. I don't want to see the word "should."
  • Homeles - Monday, May 6, 2013 - link

    AnandTech's architectural analyses are some of the best in the industry. It's your loss.
