Ever since its arrival in the ultra mobile space, NVIDIA hasn't really flexed its GPU muscle. The Tegra GPUs we've seen thus far have been OK at best, and in serious need of improvement at worst. NVIDIA has often blamed an immature OEM ecosystem unwilling to pay for the sort of large-die SoCs necessary to bring a high-performance GPU to market. Thankfully, that's all changing. Earlier this year NVIDIA laid out its mobile SoC roadmap through 2015, including the 2014 release of Project Logan - the first NVIDIA ultra mobile SoC to feature a Kepler GPU. Yesterday, at a private event at SIGGRAPH, NVIDIA demonstrated functional Logan silicon for the very first time.

NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples.

NVIDIA isn't talking about CPU cores, but it's safe to assume that Logan will feature another 4+1 arrangement of cores - likely still based on ARM's Cortex A15 IP (but perhaps a newer revision of the core). On the GPU front, NVIDIA confirmed our earlier speculation that Logan includes a single Kepler SMX:

One Kepler SMX features 192 CUDA cores. NVIDIA isn't talking about shipping GPU frequencies either, but it did provide this chart to put Logan's GPU capabilities into perspective:

Don't get too excited: we're looking at a comparison of GFLOPS, not game performance, but the peak theoretical ALU-bound performance of mobile Kepler should exceed that of a PlayStation 3 or GeForce 8800 GTX (memory bandwidth is another story, however). If we look closely at NVIDIA's chart and compare mobile Kepler to the iPad 4, we get a better idea of what sort of clock speeds NVIDIA would need to attain this level of performance. Doing some quick Photoshop estimation, it looks like NVIDIA is claiming mobile Kepler has somewhere around 5.2x the FP power of the PowerVR SGX 554MP4 in the iPad 4 (76.8 GFLOPS). That works out to right around 400 GFLOPS. With a 192-core implementation of Kepler, each core can execute one FMA (2 FLOPS) per clock, for 384 FLOPS per cycle. To hit 400 GFLOPS you'd need to clock the mobile Kepler GPU at roughly 1GHz. That's certainly doable from an architectural standpoint (although we've never seen it done on any low-power 28nm process), but it's probably a bit too high for something like a smartphone.
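
As a quick sanity check of that arithmetic, here's a minimal sketch in C, assuming the 192-core SMX and the 76.8 GFLOPS iPad 4 figure from NVIDIA's chart; the clock speeds are just the estimates discussed above:

```c
#include <stdio.h>

int main(void) {
    const int cuda_cores = 192;         /* one Kepler SMX */
    const double flops_per_clock = 2.0; /* one FMA = 2 FLOPS per core per clock */
    const double ipad4_gflops = 76.8;   /* PowerVR SGX 554MP4 peak */

    const double clocks_ghz[] = { 1.0, 0.5 }; /* ~1GHz estimate, and half of it */
    for (int i = 0; i < 2; i++) {
        double gflops = cuda_cores * flops_per_clock * clocks_ghz[i];
        printf("%.1f GHz -> %.1f GFLOPS (%.1fx iPad 4)\n",
               clocks_ghz[i], gflops, gflops / ipad4_gflops);
    }
    return 0;
}
/* Output:
   1.0 GHz -> 384.0 GFLOPS (5.0x iPad 4)
   0.5 GHz -> 192.0 GFLOPS (2.5x iPad 4) */
```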

NVIDIA didn't want to talk frequencies, but it did tell me that we might see something this fast in some sort of a tablet. I suspect that most implementations will be clocked significantly lower. Even at half the frequency, though, we're still talking about roughly PlayStation 3 levels of FP power out of a mobile SoC. We know nothing of Logan's memory subsystem, which obviously plays a major role in real-world gaming performance, but there's no getting around the fact that Logan's Kepler implementation means serious business. For years we've lamented NVIDIA's mobile GPUs; Logan looks like it's finally going to change that.

API Support and Live Demos

Unlike previous Tegra GPUs, Kepler is a fully unified architecture, and it is OpenGL ES 3.0, OpenGL 4.4 and DirectX 11 compliant. The API compliance alone is a huge step forward for NVIDIA. It's also a big one for game developers looking to move more seriously into mobile. Epic's Tim Sweeney even wrote a blog post for NVIDIA about Logan's implementation of Kepler and how it brings feature parity between PCs, next-gen consoles and mobile platforms. NVIDIA responded in kind by running some Unreal Engine 4 demos on Android on a Logan test platform. That's really the big story behind all of this. With Logan, NVIDIA will bring its mobile GPUs up to feature parity with what it's shipping in the PC market. Game developers looking to port games between console, PC, tablet and smartphone should have an easier job if all platforms support the same APIs. Logan will take NVIDIA from being very behind in API support (with no OpenGL ES 3.0 support) to the head of the class.
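
If that API parity pans out, a cross-platform renderer could pick its feature level at runtime with a single check. Here's a hypothetical sketch in C (the function name and messages are illustrative, and it assumes a current GL 3.0+/GL ES 3.0+ context already exists):

```c
#include <stdio.h>
#include <GL/gl.h>   /* on Android this would be <GLES3/gl3.h>; older desktop
                        headers may also need <GL/glext.h> for GL_MAJOR_VERSION */

/* Hypothetical feature-level selection for a renderer targeting both
 * desktop GL and mobile GL ES. */
void select_render_path(void) {
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major); /* core since GL 3.0 / GL ES 3.0 */
    glGetIntegerv(GL_MINOR_VERSION, &minor);

    if (major > 4 || (major == 4 && minor >= 4))
        printf("GL %d.%d: full desktop-class path\n", major, minor);
    else if (major >= 3)
        printf("GL(ES) %d.%d: ES 3.0-class path\n", major, minor);
    else
        printf("GL %d.%d: legacy fallback\n", major, minor);
}
```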

NVIDIA took its Ira demo, originally run on a Titan at GTC 2013, and got it up and running on a Logan development board. Ira did need some work to make the transition to mobile: the skin shaders were simplified, smaller textures were used, and the rendering resolution was dropped to 1080p. NVIDIA claims this demo was done in a 2 - 3W power envelope.

The next demo is called Island and was originally shown on a Fermi desktop part. Running on Logan/mobile Kepler, this demo shows OpenGL 4.3 and hardware tessellation working.
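
Hardware tessellation is a GL 4.0-level feature that no previous Tegra GPU could expose. For context, here's a minimal sketch of the draw-side setup in C (shader compilation omitted; the function and parameter names are illustrative):

```c
#include <GL/gl.h>   /* requires GL 4.0+ headers for patch primitives */

/* Illustrative draw call for a tessellated surface. The linked program is
 * assumed to include tessellation control and evaluation shader stages. */
void draw_tessellated_patches(GLuint program, GLuint vao, GLsizei patch_count) {
    glUseProgram(program);
    glBindVertexArray(vao);
    glPatchParameteri(GL_PATCH_VERTICES, 4);      /* 4 control points per patch */
    glDrawArrays(GL_PATCHES, 0, patch_count * 4); /* GPU subdivides each patch */
}
```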

The development board does feature a large heatspreader, but that's not too unusual for early silicon just out of bring-up. Logan's package size should be comparable to Tegra 4's, although the die size will clearly be larger. The dev board is running Android and is connected to a 10.1-inch 1920 x 1200 touchscreen.

Comments

  • Krysto - Wednesday, July 24, 2013 - link

    And you think Kepler isn't? Kepler scales all the way up to NVIDIA's highest-end chips. Plus, that argument is a pretty weak one, since what matters is how much you can scale under a certain power envelope - not how much in TOTAL. That's completely irrelevant. It just means Series 6 will - eventually (years from now) - get to 1 TF. But so will other chips.

    Also, Kepler IS something special. Here's why: full OpenGL 4.4 support. Imagination, Qualcomm and ARM have barely gotten around to implementing OpenGL ES 3.0, and even that took them until basically now to ship proper drivers and obtain certification.

    The point is that at the same performance level, Kepler-optimized games should look a lot better than games on any other chip.
  • Krysto - Wednesday, July 24, 2013 - link

    Oh, and even Intel only just got around to implementing OpenGL 4.0 in Haswell - the PC chip. So don't expect the others to support the full and latest OpenGL 4.4 anytime soon.
  • 1Angelreloaded - Wednesday, July 24, 2013 - link

    In case you don't know this already, Sandy/Ivy/Haswell integrated graphics are absolutely terrible; in most cases, using them alongside your dedicated GPU lowers performance instead of increasing it. On its own it might be fine for a few things here and there, but even that is terrible.
  • ExarKun333 - Wednesday, July 24, 2013 - link

    Your GPU knowledge is laughable. Integrated GPUs are disabled when running a dedicated one. You obviously are a troll or are plain ignorant about all CPU/GPU issues. Others, please ignore this user's posts...
  • Flunk - Wednesday, July 24, 2013 - link

    Actually, you're wrong. With NVIDIA Optimus/AMD Enduro, the discrete GPU draws into the integrated graphics' frame buffer even in discrete mode. Also, on desktops it is possible to enable your integrated graphics and discrete GPU at the same time to support more monitors.
  • lmcd - Wednesday, July 24, 2013 - link

    The difference is only present on Enduro; Optimus is almost identical. The performance hit with multiple monitors on multiple devices is likely a Windows thing and a framebuffer-sync thing, not an actual problem with Intel graphics.

    So while we know Exar is wrong, his point that Ang's information is irrelevant is in fact correct - for this mobile situation, anyway.
  • happycamperjack - Wednesday, July 24, 2013 - link

    If you take a look at the Microsoft Surface Pro's benchmark numbers, you'd be shocked by how many times faster its GPU is compared to the latest iPad and Android phones. Because I was!
  • texasti89 - Thursday, July 25, 2013 - link

    "Many times faster" is not the right metric... they're all about the same when you look at perf/watt.
  • happycamperjack - Friday, July 26, 2013 - link

    I'm just dismissing the guy who's saying that integrated graphics from Intel are absolutely terrible. They're certainly not terrible compared to the GPUs in top mobile SoCs.
  • Refuge - Thursday, July 25, 2013 - link

    Good sir, Android (90% of what this will be running, I would assume) only just implemented OpenGL ES 3.0 support with 4.3, which has yet to appear on a device (the Nexus 7 R2 is first).

    Look, being the first to support new APIs is great, but being the only one gains you nothing, because nobody is going to program for the 1%. That's just bad business.
