Hot Chips 2020 Live Blog: Microsoft Xbox Series X System Architecture (6:00pm PT)
by Dr. Ian Cutress on August 17, 2020 9:00 PM EST - Posted in
- CPUs
- Microsoft
- GPUs
- Xbox
- Live Blog
- Xbox Series X
- Hot Chips 32
09:04PM EDT - Final talk of the day is Xbox Series X System Architecture!
09:05PM EDT - Azure Silicon Architecture Team
09:06PM EDT - 3.8 GHz Zen2 Server cores
09:06PM EDT - DXR, VRS, Machine Learning Acceleration
09:07PM EDT - 14 Gbps GDDR6, 320-bit = 560 GB/s
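(Quick sanity check, not from the slide itself: the headline figure is just the bus width times the per-pin data rate.)
\[ \frac{320\ \text{bit} \times 14\ \text{Gbps}}{8\ \text{bit/byte}} = 560\ \text{GB/s} \]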
09:07PM EDT - Hardware accelerators in blue
09:07PM EDT - 120 Hz support, VRR, Xbox Velocity Architecture for MSP Crypto/Decomp on NVMe SSD
09:07PM EDT - Acoustic acceleration
09:07PM EDT - HSP/Pluton RoT - security
09:08PM EDT - 360.4 mm² on TSMC N7 enhanced
09:08PM EDT - 15.3B transistors
09:08PM EDT - 2 four-core CPU clusters
09:08PM EDT - 10 GDDR6 controllers
09:09PM EDT - GPU 12 FLOPs
09:10PM EDT - AVX256 gives 972 GFLOPs across the CPU
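(That figure is consistent with Zen 2's two 256-bit FMA pipes per core, i.e. 32 FP32 FLOPs per cycle - my derivation, not from the slide.)
\[ 8\ \text{cores} \times 3.8\ \text{GHz} \times 32\ \text{FLOPs/cycle} = 972.8\ \text{GFLOPS} \]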
09:10PM EDT - 16 GB of GDDR6 total
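(The split isn't on this slide, but per Microsoft's published specs the 16 GB comes from six 2 GB and four 1 GB devices: a GPU-optimal 10 GB interleaves across all ten channels at the full 560 GB/s, while the remaining 6 GB lives on only the six larger devices, so it sees roughly six-tenths of the bandwidth.)
\[ \tfrac{6}{10} \times 560\ \text{GB/s} = 336\ \text{GB/s} \]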
09:11PM EDT - Says Zen2 server class, but L3 cache is mobile class?
09:11PM EDT - Display processing is kept off the shader engines
09:11PM EDT - IO hub supports PCIe 4.0 x8
09:11PM EDT - Operates on linear light values, not gamma light values
09:12PM EDT - ALLM - Auto Low Latency Mode
09:13PM EDT - Increased die cost on this APU over previous generation
09:13PM EDT - Significantly more expensive!
09:13PM EDT - Trade off
09:13PM EDT - MS created Audio engines - 3 engines, CFPU2, MOVAD, LOGAN
09:13PM EDT - CFPU2 for audio convolution, FFT, reverb
09:14PM EDT - such as Project Acoustics to model 3D audio sources
09:14PM EDT - MOVAD - hyper real-time hardware audio decoder
09:14PM EDT - >300 channels decoded at once
09:14PM EDT - best trade off codec, so made in hardware
09:15PM EDT - >100 dB signal-to-noise ratio
09:15PM EDT - HW real-time decode, matched to the sample rate
09:15PM EDT - Logan also offers better offload in traditional modes
09:15PM EDT - HSP/Pluton: Root of trust, crypto, SHACK (crypto keys)
09:15PM EDT - MSP supports 5 GB/s high-bw crypto on the SSD
09:16PM EDT - DRAM to SSD balance needed for refill
09:16PM EDT - Load times keep increasing unless storage-to-DRAM bandwidth increases, hence NVMe SSDs
09:17PM EDT - Sampler Feedback System
09:17PM EDT - New metadata for texture portions to pre-load texture caches
09:17PM EDT - Direct Storage
09:17PM EDT - Manages data locations ahead of developer
09:18PM EDT - Distinct savings for the most detailed texture maps
09:18PM EDT - Lossless MS XVA 2:1 compression
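(Putting the pieces together with the publicly quoted raw NVMe figure - not stated on this slide - a 2:1 ratio roughly doubles effective streaming throughput, in line with the ~5 GB/s crypto figure mentioned above.)
\[ 2.4\ \text{GB/s raw} \times 2 \approx 4.8\ \text{GB/s effective} \]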
09:19PM EDT - Need big GPU - get the tech out of the way
09:19PM EDT - Need raw ops/second increase within PPA and cost
09:19PM EDT - DirectX feature level 12_2 supported in HW
09:19PM EDT - 26 active dual CUs (52 CUs)
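(The GPU clock isn't on this slide, but plugging in the publicly stated 1.825 GHz shows where the 12 TFLOPS headline comes from.)
\[ 52\ \text{CUs} \times 64\ \text{ALUs} \times 2\ \text{FLOPs/clk} \times 1.825\ \text{GHz} \approx 12.15\ \text{TFLOPS} \]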
09:20PM EDT - Single geometry engine supports primitives
09:20PM EDT - Directly snoops CPU caches
09:20PM EDT - Dual stream multi-core command processor
09:20PM EDT - Double rate 16-bit math
09:20PM EDT - single cycle issue rate to reduce stalls
09:21PM EDT - CUs have 25% better perf/clock compared to last gen
09:21PM EDT - GPU Evolution: FLOPS have outpaced mem space and BW
09:21PM EDT - Screen pixel counts have grown at a rate in the middle
09:22PM EDT - How to fill pixels better without blowing the power budget
09:22PM EDT - VRS
09:22PM EDT - Supports shading rates up to 2x2 (see the minimal D3D12 sketch below)
09:22PM EDT - 10-30% perf gain for tiny area cost
09:23PM EDT - Full edge detail
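(For readers unfamiliar with VRS, here's a minimal sketch of the per-draw D3D12 path - my illustration, not Microsoft's code; the draw call is a placeholder and the Tier 1 capability check is assumed to happen elsewhere in the app.)

#include <d3d12.h>

// Minimal sketch: per-draw (Tier 1) Variable Rate Shading in D3D12.
// Assumes 'cmdList' comes from a device reporting D3D12_VARIABLE_SHADING_RATE_TIER_1+.
void DrawWithCoarseShading(ID3D12GraphicsCommandList5* cmdList)
{
    // Shade once per 2x2 pixel block for low-frequency geometry;
    // PASSTHROUGH combiners leave the base rate unmodified.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    cmdList->DrawInstanced(36, 1, 0, 0);   // placeholder draw

    // Restore full-rate 1x1 shading for detail passes (UI, text, edges).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}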
09:23PM EDT - SFS
09:24PM EDT - Previously very slow to enable
09:24PM EDT - Two new HW structures for tile-by-tile management for in-DRAM textures
09:25PM EDT - Clamps LOD to resident mip levels (rough streaming sketch after this block)
09:26PM EDT - Tilemaps should stay on die for best latency
09:27PM EDT - SFS: 60% IO/Mem savings for small die area cost
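(To make the tile-by-tile idea concrete, here's a rough CPU-side sketch of what a title's streaming loop might do with the feedback data - every type and helper is hypothetical, invented for illustration; the real mechanism lives in hardware and the DirectX sampler-feedback API.)

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical structures for illustration only.
struct FeedbackMap  { std::vector<uint8_t> minMipPerTile; };   // finest mip the GPU sampled per tile
struct StreamingTex {
    std::vector<uint8_t> residentMipPerTile;                   // finest mip currently in DRAM per tile
    uint8_t lodClamp = 15;                                     // start fully clamped (coarsest only)
};

// Hypothetical async SSD read - stubbed out here.
void RequestTileLoad(StreamingTex&, size_t /*tile*/, uint8_t /*mip*/) {}

void UpdateResidency(const FeedbackMap& fb, StreamingTex& tex)
{
    uint8_t coarsestResident = 0;
    for (size_t tile = 0; tile < fb.minMipPerTile.size(); ++tile) {
        const uint8_t wanted = fb.minMipPerTile[tile];
        if (wanted < tex.residentMipPerTile[tile])             // GPU wanted finer detail than we hold
            RequestTileLoad(tex, tile, wanted);                // stream just that tile, not the whole mip
        coarsestResident = std::max(coarsestResident, tex.residentMipPerTile[tile]);
    }
    tex.lodClamp = coarsestResident;                           // never sample finer than what's resident
}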
09:27PM EDT - DX Ray Tracing Accel
09:27PM EDT - Not a complete replacement - RT can be applied selectively based on traditional models
09:28PM EDT - Custom ray-triangle units
09:28PM EDT - ML inference
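(No numbers on this slide, but the publicly quoted inference rates follow from packing narrower types into the same ALUs - double-rate FP16 plus 4x/8x dot-product paths on the 12.15 TFLOPS FP32 base.)
\[ 12.15 \times 2 \approx 24.3\ \text{TFLOPS FP16},\qquad \times 4 \approx 49\ \text{TOPS INT8},\qquad \times 8 \approx 97\ \text{TOPS INT4} \]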
09:29PM EDT - Two virtualized command streams - two VMs
09:29PM EDT - Main title OS vs system OS
09:29PM EDT - 32b HDR rendering, blending, and display
09:29PM EDT - Optimized games - unable to show them at the event
09:30PM EDT - Q&A Time
09:31PM EDT - Q: TDP? A: Not commenting. There are so many things that are involved in the TDP, and tradeoffs. We're not really able to describe it without describing it in a technical environment
09:32PM EDT - Q: Can you stream into the GPU cache? A: Lots of programmable cache modes. Streaming modes, bypass modes, coherence modes.
09:33PM EDT - Q: Coherency CPU and GPU? A: GPU can snoop CPU, reverse requires software
09:35PM EDT - Q: Are you happy with DX12 as a low-level hardware API? A: DX12 is very versatile - we have some Xbox-specific enhancements that power developers can use. But we try to have consistency between Xbox and PC. Divergence isn't that good. But we work with developers when designing these chips so that their needs are met. Not heard many complaints so far (as a silicon person!). We have a SMASH driver model. The game binaries implement the hardware-laid-out data that the GPU eats directly - it's not a HAL layer abstraction. MS also re-writes the driver and smashes it together, we replace that and the firmware in the GPU. It's significantly more efficient than the PC.
09:35PM EDT - Q: Is link between CPU and GPU clocks? A: Hardware is independent.
09:36PM EDT - Q: Is the CPU 3.8 GHz clock a continual or turbo? A: Continual.
09:36PM EDT - Continual to minimize variance
09:37PM EDT - Q: TSMC 7nm enhanced, is it N7P, N7+, or something else? A: It's not base 7nm, it's progressed over time. Lots of work between AMD and TSMC to hit our targets and what we needed
09:38PM EDT - Q: Says Zen 2 is server class, but you use L3 mobile class? A: Yeah our caches are different, but I won't say any more, that's more AMD.
09:39PM EDT - Q: With 20 channels of GDDR6, is that really cheaper than 2 stacks of HBM? A: We're not religious about which DRAM tech to use. We needed the GPU to have a ton of bandwidth. Lots of channels allow low-latency requests to be serviced. HBM did have an MLC model thought about, but people voted with their feet and JEDEC decided not to go with it.
09:40PM EDT - Q: GDDR6 on the sides, not the bottom? A: The bottom is power - how the board interfaces with the chip. The GPU has high EDC and currents, and you need clean copper to deliver that. With that much current you need to leave that space unless you use super expensive packaging. We did it the cost-efficient way
09:41PM EDT - Q: Why do you need so much math for audio processing? A: 3D positional audio, spatial audio, and real-world spaces - if you have 300-400 positional audio sources in 3D and want to start doing other effects on all those samples, the compute gets very heavy. Imagine 20 people fighting in a cave, with reflections and all sorts of noises
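(A purely illustrative back-of-envelope - the filter length is my assumption, not from the talk: just convolving 300 sources with a 512-tap HRTF per ear at 48 kHz is already on the order of)
\[ 300 \times 2\ \text{ears} \times 512\ \text{taps} \times 48{,}000\ \text{Hz} \approx 14.7\ \text{GMAC/s} \]
(before any reverb, occlusion, or reflection modelling is layered on top.)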
09:43PM EDT - That's a wrap and we're done for today. Come back tomorrow at 8:30am PT to talk about FPGAs. It's 2:44am here in the UK, time to go to bed.
58 Comments
Spunjji - Friday, August 21, 2020 - link
There's quite a bit more non-core logic in this chip than Renoir - and a much larger proportion of the die is the GPU - so it's not necessarily very helpful to compare the two.
wujj123456 - Tuesday, August 18, 2020 - link
I laughed when I saw "GPU 12 FLOPs". (Slides are correct.) Then I started to think about what 12 FLOPs feels like and realized 12 FLOPs is still way faster than what I can do. Damn computers.
liquid_c - Tuesday, August 18, 2020 - link
“Zen2 server class CPU cores”
This line doesn’t belong in a marketing slide for a gaming console.
darkz3r0 - Tuesday, August 18, 2020 - link
I think "server class" is meant to be more stable/reliable in the long term. CPUs and GPUs nowadays implement too many OC techniques out of the box, which is why OC these days sucks a lot - all we have done this decade is increase power and frequency, look how Intel ended up. If you look at AMD Zen 2 CPUs, the 3700X has the same core count but it can boost to 4.4 GHz on paper (in reality, with a good motherboard and RAM combination, numbers are more likely 4.1~4.2). The Epyc or Xeon side of CPUs are clocked way lower, and that's why they aren't targeted for gaming. My guess is the info from AMD says server class because it's not boosted like, say, the PS5 - and the PS5 requires a larger cooling solution, it just doesn't make any sense. I wish people would stop consuming gaming stuff; I'd rather have a straight line than a zigzag, but that's me, I like efficiency instead of performance bursts.
smithg5 - Tuesday, August 18, 2020 - link
It does ECC with the GDDR6 memory (a custom thing). They are going to use this same SoC for xCloud servers in Azure, and they designed it with use in the data center in mind - using a virtualized display controller for instance. They’re going to run 4 Xbox One instances from a single chip.
I think it’s a reasonable thing to say.
Spunjji - Wednesday, August 19, 2020 - link
Remember, it's a comparison to their previous products which used tablet-class CPU cores. I think darkz3r0 is right about them emphasising the stable clocks, too.KimGitz - Tuesday, August 18, 2020 - link
Microsoft say they were targeting a 4-6x improvement over the Xbox One's GPU at 1X power. Microsoft have an impressive console. All that remains before we can finally call it is the price. Kudos
Yeah, looking at the memory speeds (which are more important than SSD speeds for graphics), this console will deliver PC-class graphics. I like Xbox over PS4 as a device - even the OG Xbox One had better quality cables than the PS4, and since the Xbox One S/X the quality has been above it. My Xbox One X beats my PS4 Pro. I had to purchase a new PS4 Pro with a newer power supply because it was simply so loud that even wearing a headset I was bothered; I never had noise issues with the Xbox One X. The quality of the Xbox One X is worth the extra $100 over the PS4 Pro: it has better quality audio and better outputs (Disney+ can do HDR where the PS4 Pro can't), and in games the One X isn't only delivering faster FPS but better quality too. I had a hard time getting the colors on my PS4 Pro right - later they added an HDR calibration setting, but some games still look washed out. If you are using an IPS panel or a bad quality TV it probably won't matter since the gamma is very low, but on a FALD TV with a good panel (OLED or QLED quality), the Xbox One X looked richer without needing tweaks, so movies are more stunning on it. That's why I adore my Xbox, and I have no doubt they will make the Series X a good device - it just needs games.
Eliadbu - Tuesday, August 18, 2020 - link
I agree - Microsoft learnt their lesson with the Xbox One X: they waited with its release but made a much better device than the PS4 Pro. I bought the PS4 Pro because of the games - I have a good PC to enjoy the games I can play on it, but I wanted the exclusives, so I got the PS4 Pro. On the question of what matters more for GPU performance: traditionally only the memory speed matters, because assets and textures are preloaded, but with the new consoles and the ability to load on the fly, things might get more complicated.
Anyway, for me the PS5 is almost a guaranteed buy over the Xbox, mostly because of the games and PS4 backwards compatibility.
MojArch - Tuesday, August 18, 2020 - link
Wow! So much fanboyism!
First off, the PS4 cable is good quality, which makes me wonder how fanboyish someone can be - or whether they ever owned a PS4 Pro!
Second, I have a PS4 Pro and never had as bad a sound issue as you are describing. I am not denying it was a bit loud, but nothing like you said. (It might have needed a little cleaning, which obviously you never did.)
You must never have owned a PS4 Pro, because it clearly can do HDR, and if you had a PS4 Pro and got a washed-out looking game, it's just because your TV is not certified for HDR. Buy one with HDR certification and voila, everything looks amazing!
So I suggest trying one game on a PS4 Pro with an actual HDR-certified TV to see what real HDR is, instead of running around and telling everyone lies!