Hot Chips 31 (2019) Programme Announced: Zen, Navi, POWER, Lakefield, Gen-Z, Turing, Lisa Su Keynote
by Ian Cutress on May 16, 2019 1:00 PM EST
There are two trade shows every year that I love. Computex in June is great because of the scale of the industry it covers, and Taipei is a wonderful location. Hot Chips in August is the other, amazing for the level of depth it provides on the latest technology as well as upcoming releases. This year the list of presentations for Hot Chips is almost overwhelming, and we’re looking forward to attending.
Hot Chips is now in its 31st iteration, and is the site of a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic design take part: Intel, AMD, NVIDIA, Arm, Xilinx, and IBM are on the latest list, and developers of large accelerators such as Fujitsu and NEC presented last year. Even the foundry businesses take part, as do some of the big cloud providers: Google last year showed off its security chips, which no-one else will ever see. There are notable absences, such as Apple, who despite being on the committee last presented at the conference in 1994.
We’ve come to expect Hot Chips to be the place for microarchitecture overviews, as well as a wider examination of the finer points of hardware that will affect a large number of us behind the scenes. There have also been portions of the conference dedicated to new designs, open-source hardware, and security vulnerabilities. This year sees a large and wide-ranging set of impressive headlining talks planned.
Hot Chips 31 will run from the 18th to the 20th of August 2019 at Stanford University. The first day, Sunday the 18th, is typically a tutorial day; the detailed talks start on the 19th.
**Hot Chips 31 (2019) Time Table - Day 1**

| Time | Session | Company | Talk | Product |
|---|---|---|---|---|
| 09:00 | General Purpose Compute | AMD | Zen 2 | Matisse |
| | | Arm | A Next-Gen Cloud-to-Edge Infrastructure SoC Using the Arm Neoverse N1 CPU and System Products | Neoverse N1 |
| | | IBM | IBM's Next Generation POWER Processor | POWER9 with IO |
| 11:00 | Memory | Upmem | True Processing In Memory with DRAM Accelerator | |
| | | Princeton | A Programmable Embedded Microprocessor for Bit-Scalable In-Memory Computing | |
| | | Intel | Intel Optane | Optane DCPMM |
| 13:45 | Keynote | AMD (Dr. Lisa Su) | Delivering the Future of High-Performance Computing with System, Software and Silicon Co-Optimization | |
| 14:45 | Methodology and ML Systems | Stanford | Creating an Agile Hardware Flow | |
| | | MLPerf | MLPerf: A Benchmark Suite for Machine Learning From an Academic-Industry Co-operative | |
| | | Facebook | Zion: Facebook Next-Generation Large-Memory Unified Training Platform | |
| 16:45 | ML Training | Huawei | A Scalable Unified Architecture for Neural Network Computing from Nano-Level to High Performance Computing | Da Vinci |
| | | Intel | Deep Learning Training at Scale - Spring Crest Deep Learning Accelerator | Spring Crest |
| | | Cerebras | Wafer Scale Deep Learning | |
| | | Habana | Habana Labs Approach to Scaling AI Training | |
The first day kicks off with a general purpose compute session featuring AMD, Arm, and IBM. AMD’s talk looks into its newest Zen 2 microarchitecture, which will power the Matisse-based ‘Ryzen 3000’ line of desktop processors as well as the ‘Rome’ server processors. We’re not expecting much new from this presentation, given that AMD is expected to have released the products to market before mid-August. Following AMD is Arm, with the Neoverse N1 platform that was announced, and that we reported on, several weeks ago. IBM’s talk will be interesting: it discusses the latest POWER processor, which is likely to be the optimized version of POWER9 focusing on IO support.
The second session is about memory, with the first two talks, from Upmem and Princeton, covering in-memory compute. In-memory compute is seen as a fundamental frontier for performance: why shuttle data back and forth when simple ALU operations can be performed local to where the data is stored? The goal is to save power and potentially speed up compute times by removing memory and DRAM accesses as bottlenecks. The third talk in this session will be from Intel on its new Optane DC Persistent Memory products, which we’re actually testing in-house right now.
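To see why moving the compute to the memory is attractive, a back-of-the-envelope energy model helps. The sketch below is purely illustrative: the per-operation energy figures are assumed placeholders, not measured values for Upmem's or anyone else's hardware, but they reflect the widely cited point that a DRAM transfer costs orders of magnitude more energy than an ALU operation.

```python
# Toy energy model contrasting conventional compute (operands shuttled between
# DRAM and the CPU) with processing-in-memory (PIM), where the ALU op happens
# next to the DRAM bank. All per-op energies are illustrative assumptions.

DRAM_TRANSFER_PJ = 100.0   # assumed: energy to move one operand CPU <-> DRAM
CPU_ALU_OP_PJ = 1.0        # assumed: one ALU op on the CPU
PIM_ALU_OP_PJ = 2.0        # assumed: one ALU op local to the memory bank

def conventional_energy(n_ops: int) -> float:
    # Each op reads two operands from DRAM and writes one result back,
    # so it pays three transfers plus the ALU op itself.
    return n_ops * (3 * DRAM_TRANSFER_PJ + CPU_ALU_OP_PJ)

def pim_energy(n_ops: int) -> float:
    # Operands never leave the memory; only the (slightly costlier)
    # local ALU op is paid.
    return n_ops * PIM_ALU_OP_PJ

if __name__ == "__main__":
    n = 1_000_000
    print(f"conventional: {conventional_energy(n) / 1e6:.1f} uJ")
    print(f"in-memory:    {pim_energy(n) / 1e6:.1f} uJ")
```

Under these assumed numbers the data movement dominates by roughly two orders of magnitude, which is the whole argument for PIM: even a slower, simpler ALU wins if it never has to pull operands across the memory bus.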
Following lunch comes the first keynote presentation of the event, from AMD’s CEO Dr. Lisa Su. The talk will focus on how AMD is achieving its next generation of performance gains with Zen 2 and Navi, both built on 7nm. This presentation is likely to be more of an overview than a set of new disclosures, though we might see a hint of a roadmap or two. A number of key AMD partners are expected to be in attendance, and scoring a keynote at Hot Chips, along with keynotes at CES and Computex this year, underlines a headline year in AMD’s history.
The Methodology and ML (Machine Learning) Systems session features a talk from MLPerf, an organization building an industry-standard machine learning benchmark suite. Until this point MLPerf has been in its early revisions, and hopefully the additional on-site discussions will drive the suite forward. The Facebook talk about its Zion training platform also looks interesting, not least because whenever Facebook does something, it does it at scale.
The final session of the day covers machine learning training. Most of the time we end up writing about inference, but training still represents half of the revenue of the machine learning hardware industry (according to Intel at its recent Investor Day). Huawei is set to disclose information about its Da Vinci platform, while Intel will disclose details of its Spring Crest family. Cerebras and Habana round out the session.
**Hot Chips 31 (2019) Time Table - Day 2**

| Time | Session | Company | Talk | Product |
|---|---|---|---|---|
| 08:30 | Embedded and Auto | Cypress | CYW89459: High Performance and Low Power Wi-Fi and BT5.1 Combo Chip | |
| | | Alibaba | Ouroboros: A WaveNet Inference Engine for TTS Applications on Embedded Devices | |
| | | Tesla | Compute and Redundancy Solution for Tesla's Full Self Driving Computer | Tesla FSD |
| 10:30 | ML Inference | MIPS / Wave | The MIPS Triton AI Processing Platform | Triton AI |
| | | NVIDIA | A 0.11 pJ/Op, 0.32-128 TOPS Scalable MCM-Based DNN Accelerator | NVIDIA NPU |
| | | Xilinx | Xilinx Versal / AI Engine | Versal |
| | | Intel | Spring Hill - Intel's Data Center Inference Chip | Spring Hill |
| 13:45 | Keynote | TSMC (Dr. Philip Wong) | What Will The Next Node Offer Us? | |
| 14:45 | Interconnects | HPE | A Gen-Z Chipset for Exascale Fabrics | Gen-Z |
| | | Ayarlabs | TeraPHY: A Chiplet Technology for Low-Power, High Bandwidth Optical I/O | TeraPHY |
| 16:15 | Packaging and Security | Intel | Hybrid Cores in a Three Dimensional Package | Lakefield |
| | | Tsinghua | Jintide: A Hardware Security Enhanced Server CPU with Xeon Cores | Jintide |
| 17:45 | Graphics and AR | NVIDIA | RTX ON: The NVIDIA Turing GPU Architecture | Turing |
| | | AMD | 7nm 'Navi' GPU | Navi |
| | | Microsoft | The Silicon at the Heart of Hololens 2.0 | Hololens |
The second day keeps the juices flowing with a number of hot topics.
The first session, labelled Embedded and Auto, should be really interesting thanks to Tesla presenting information about its Full Self Driving (FSD) chip, which would have been developed under Jim Keller during his time there. Tesla’s disclosures at its most recent event were already detailed, so we hope Tesla will expose even more information about the chip at Hot Chips. Alibaba also has a talk in this session, focused on its embedded inference engine.
The Machine Learning Inference session is the event’s other big ML session, further reinforcing the direction of compute towards ML over the next decade. During this session we should learn about NVIDIA’s dedicated multi-chip module inference design, showcasing technologies beyond its base GPU heritage. Xilinx will reveal more about its Versal platform, and Intel will talk about Spring Hill, its data center inference chip that we already know contains Ice Lake cores and is being built in partnership with Facebook.
After lunch, TSMC has the keynote for the second day. Dr. Philip Wong will discuss future technology nodes, which is apt given that both TSMC and Samsung have recently held events discussing the next generations of their foundry processes.
Following the keynote is the Interconnects session. Hot Chips has a sister conference, Hot Interconnects, which this year runs a few days before Hot Chips; as a result, the interconnects session at Hot Chips has only two speakers. The more interesting talk from the outset is from HPE (Hewlett Packard Enterprise), which is presenting its new Gen-Z chipset and adaptor for large-scale fabric implementations. Gen-Z is one of the future interconnects competing with the likes of CCIX, CXL, and others.
The Packaging and Security session looks super interesting. Intel is set to discuss its Lakefield processor, which uses the new Foveros packaging technology. Intel has promised to deliver Lakefield in products by the end of the year, but we’re hoping that the Hot Chips talk will offer a deeper disclosure than what we’ve heard previously. The second talk of the session is from Tsinghua University in China, with its new Jintide CPU. If you’ve never heard of Jintide, I don’t blame you: me neither. But the title of the talk indicates that it is a custom CPU design using Intel’s Xeon cores, suggesting a custom SoC platform built in partnership with Intel. Very interesting indeed!
Hot Chips usually ends with a bang, so at the end of a long day we’re going to hear about the latest graphics and AR technologies. NVIDIA is going to talk Turing, which has already been disclosed, so we’re not expecting anything new there, but AMD is set to talk about Navi. We are expecting AMD to release Navi between now and Hot Chips, so there’s a chance there will be nothing new from AMD either. The final talk, however, should yield plenty of new information: Microsoft is going to talk about the silicon inside its new Hololens 2.0 design. I’m looking forward to it!
The Hot Chips conference is set to run from the 18th to the 20th of August. I’ll be there, hopefully live blogging as many sessions as possible.
- AnandTech at Hot Chips 30: Our 2018 Show Coverage
- Hot Chips 2018: Google Titan Live Blog
- Hot Chips 2018: Nanotubes as DRAM from Nantero
- Hot Chips 2018: IBM Power9 Scale Up CPU
- Hot Chips 2018: NEC Vector Processor Live Blog
- Hot Chips 2018: Intel on Graphics Live Blog
- Hot Chips 2018: Samsung’s Exynos-M3 CPU Architecture Deep Dive
- Intel at Hot Chips 2018: Showing the Ankle of Cascade Lake
- Hot Chips: IBM's Next Generation z14 CPU Mainframe Live Blog
- Hot Chips: Google TPU Performance Analysis Live Blog
Irata - Friday, May 17, 2019
Which security issues? Did not see anything on AnandTech or in the staff tweets about this, so must not be anything important and definitely far less serious than "Ryzenfall"
GreenReaper - Friday, May 17, 2019
Ars covered it a few days ago.
AMD claims they are, as far as they can see, unaffected.
And yeah, I've been wondering why that was, especially since it concerns all but the latest CPUs...
I like to think that AnandTech is waiting for a test to be done so they can show impact benchmarks. There's probably only one or two staff who can do that; they may be elsewhere, plus it takes time.
Irata - Friday, May 17, 2019
I was being sarcastic :) Phoronix has run tests and performance is down - on top of the already performance-degrading previous patches. And this with HT still enabled....
As for not concerning the latest Intel CPUs - researchers do not seem to agree with Intel, and Intel was using rather vague language. Perhaps the fact that most of their 9th gen Core CPUs come with HT disabled is the fix?
Lord of the Bored - Saturday, May 18, 2019
I thought you were HStewart at first.
Irata - Saturday, May 18, 2019
Will make sure to add the [/sarcasm] tag next time :-)
Rudde - Friday, May 17, 2019
I've understood that Jim Keller didn't lead the development of the FSD chip, but led some other project. He is not mentioned in any of the patents, nor was he mentioned at the autonomy day presentations.
peevee - Monday, May 20, 2019
I am more interested in the "Upmem True Processing In Memory with DRAM Accelerator" part. Fanboy f(igh)(ar)ts between outdated CPU architectures is boring.