Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600
by Ian Cutress on April 2, 2019 1:03 PM EST

Today is the big day in 2019 for Intel’s enterprise product announcements, combining some products that should be available from today with a few others set to arrive in the next few months. Rather than going for a staggered approach, we have it all in one: processors, accelerators, networking, and edge compute. Here’s a quick run-down of what’s happening today, along with links to all of our deeper dive articles, our reviews, and announcement analysis.
Cascade Lake: Intel’s New Server and Enterprise CPU
The headliner for this festival is Intel’s new second-generation Xeon Scalable processor, Cascade Lake. This is the processor that Intel will promote heavily across its enterprise portfolio, especially as OEMs such as Dell, HP, Lenovo, Supermicro, QCT, and others all update their product lines with the new hardware. (You can read some of the announcements here: Dell on AT, Supermicro on AT, Lenovo on AT, Lenovo at Lenovo.)
While these new CPUs do not use a new microarchitecture compared to the first-generation Skylake-based Xeon Scalable processors, Intel surprised most of the press at its Tech Day with the sheer number of improvements in other areas of Cascade Lake. Not only are there more hardware mitigations against Spectre and Meltdown than we expected, but we also get Optane DC Persistent Memory support. The high-volume processors get a performance boost from up to 25% extra cores, and every processor gets double the memory support (and faster memory, too). The latest manufacturing technologies allow for frequency improvements, which, when combined with new AVX-512 modes, show some drastic increases in machine learning performance for those who can use them.
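The machine-learning uplift comes from the new VNNI extension to AVX-512, whose VPDPBUSD instruction fuses an unsigned-8-bit by signed-8-bit multiply and a 32-bit accumulate into a single operation. As a rough sketch (ignoring the instruction's overflow behavior and vector width), each 32-bit lane does the following:

```python
def vpdpbusd_lane(acc, a_bytes, b_bytes):
    """Scalar model of one 32-bit lane of AVX-512 VNNI's VPDPBUSD:
    four unsigned-8 x signed-8 products are summed and added to a
    signed 32-bit accumulator in one instruction (previously three)."""
    assert len(a_bytes) == len(b_bytes) == 4
    return acc + sum(u * s for u, s in zip(a_bytes, b_bytes))

# An INT8 dot product, four elements per "lane"
a = [1, 2, 3, 4, 5, 6, 7, 8]       # unsigned activations
b = [10, -1, 2, 0, 3, 3, -2, 1]    # signed weights
acc = 0
for i in range(0, len(a), 4):
    acc = vpdpbusd_lane(acc, a[i:i + 4], b[i:i + 4])
# acc now holds the full INT8 dot product, 41
```

The real instruction does this across sixteen lanes of a 512-bit register at once, which is where the quoted inference speed-ups come from.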
| 2nd Gen Cascade Lake | AnandTech | 1st Gen Skylake-SP |
|---|---|---|
| April 2019 | Released | July 2017 |
| [8200] Up to 28 / [9200] Up to 56 | Cores | [8100] Up to 28 |
| 1 MB L2 per core, up to 38.5 MB shared L3 | Cache | 1 MB L2 per core, up to 38.5 MB shared L3 |
| Up to 48 lanes | PCIe 3.0 | Up to 48 lanes |
| Six channels, up to DDR4-2933, 1.5 TB standard | DRAM Support | Six channels, up to DDR4-2666, 768 GB standard |
| Up to 4.5 TB per processor | Optane Support | - |
| AVX-512 VNNI with INT8 | Vector Compute | AVX-512 |
| Variants 2, 3, 3a, 4, and L1TF | Spectre/Meltdown Fixes | - |
| [8200] Up to 205 W / [9200] Up to 400 W | TDP | Up to 205 W |
New to the Xeon Scalable family is the AP line of processors. Intel hinted at these late last year, but we finally have some of the details. The new Xeon Platinum 9200 family combines two 28-core dies into a single package, offering up to 56 cores and 112 threads with 12 channels of memory, in a thermal envelope of up to 400 W. This is essentially a 2P configuration in a single chip, designed for high-density deployments. These BGA-only CPUs will be sold exclusively with an underlying Intel-designed platform straight from OEMs, and will not have a direct price – customers will pay for ‘the solution’ rather than the product.
For this generation, Intel will not be producing models with ‘F’ Omnipath fabric on board. Instead, users will have ‘M’ models with 2 TB memory support and ‘L’ models with 4.5 TB memory support, aimed at the Optane markets. There will also be other letter designations, some of them new:
- M = Medium Memory Support (2.0 TB)
- L = Large Memory Support (4.5 TB)
- Y = Speed Select Models (see below)
- N = Networking/NFV Specialized
- V = Virtual Machine Density Value Optimized
- T = Long Life Cycle / Thermal
- S = Search Optimized
Out of all of these, the Speed Select ‘Y’ models are the most interesting. These have additional power monitoring tools that allow applications to be pinned to certain cores that can boost higher than others, distributing the available power budget across the chip based on what needs to be prioritized. These parts also allow for three different OEM-specified base and turbo frequency settings, so that one system can be focused on three different types of workloads.
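Speed Select itself is configured through firmware and OEM tooling, but actually landing a priority application on the boosted cores comes down to ordinary CPU affinity. A minimal Linux sketch – which cores are the high-priority ones is an assumption here, since the OS does not report that mapping directly:

```python
import os

def pin_to_cores(pid, cores):
    """Pin a process to a set of logical CPUs (Linux only).
    With Speed Select, OEM tooling would designate e.g. cores
    {0, 1} as the higher-turbo priority cores; that designation
    is hypothetical here."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0) to one core from its allowed set
allowed = sorted(os.sched_getaffinity(0))
result = pin_to_cores(0, {allowed[0]})
```

The scheduler will then only run the process on that core, so any extra turbo headroom granted to it benefits the pinned workload.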
We are currently in the process of writing our main review, and plan to tackle the topic from several different angles in a number of stories. Stay tuned for that. In the meantime, we have the SKU lists and our launch-day news here:
The Intel Second Generation Xeon Scalable:
Cascade Lake, Now with Up To 56-Cores and Optane!
The other key element to the processors is the Optane support, discussed next.
Optane DCPMM: Data Center Persistent Memory Modules
If you’re confused about Optane, you are not the only one.
Broadly speaking, Intel has two different types of Optane: Optane Storage, and Optane DIMMs. The storage products have already been in the market for some time, both in consumer and enterprise, showing exceptional random access latency above and beyond anything NAND can provide, albeit for a price. For users that can amortize the cost, it makes for a great product for that market.
Optane in the memory module form factor works on the DDR4-T standard. The product is aimed at the enterprise market, and while Intel has talked about ‘Optane DIMMs’ for a while, today is the official launch. Select customers are already testing and using it, while general availability is due in the next couple of months.
Me with a 128 GB module of Optane. Picture by Patrick Kennedy
Optane DC Persistent Memory, to give it its official title, comes in a DDR4 form factor and works with Cascade Lake processors to enable large amounts of memory in a single system – up to 6 TB in a dual-socket platform. Optane DCPMM is slightly slower than traditional DRAM, but allows for a much higher memory density per socket. Intel is set to offer three module sizes: 128 GB, 256 GB, and 512 GB. Optane doesn’t replace DDR4 entirely – you need at least one module of standard DDR4 in the system for it to work (the DRAM acts as a buffer) – but it means customers can pair 128 GB of DDR4 with 512 GB of Optane, rather than looking at 256 GB of pure DDR4 backed with NVMe.
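The headline capacity figures fall out of simple slot arithmetic. A small sketch of the capacity model – the six-DRAM-plus-six-DCPMM slot layout per socket is an assumption for illustration, and in Memory Mode only the Optane capacity is visible to the OS since the DDR4 acts as a cache:

```python
def system_capacity_gb(sockets, dcpmm_per_socket, dcpmm_gb,
                       dram_per_socket, dram_gb, mode):
    """Rough DCPMM capacity model. In 'memory' mode the DDR4 is a
    cache and only Optane capacity is OS-visible; in 'appdirect'
    mode both pools are usable. Slot counts are assumptions."""
    optane = sockets * dcpmm_per_socket * dcpmm_gb
    dram = sockets * dram_per_socket * dram_gb
    return optane if mode == "memory" else optane + dram

# The article's figure: six 512 GB modules per socket, two sockets,
# Memory Mode -> 6144 GB (6 TB) visible to the OS
headline = system_capacity_gb(2, 6, 512, 6, 16, "memory")
```

Swapping the mode to `"appdirect"` adds the DRAM pool on top, since both tiers are then directly addressable.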
With Optane DCPMM in a system, it can be used in two modes: Memory Mode and App Direct.
The first mode is the simplest to think about: the system sees one large pool of DRAM, but in reality it uses the Optane DCPMM as the main memory store and the DDR4 as a buffer in front of it. If the buffer already holds the data needed, reads and writes happen at standard DRAM speed; if the data is out in the Optane, access is slightly slower. How this is negotiated is between the DDR4 controller and the controller on the Optane DCPMM module, but it ultimately works well for installations that need a large memory footprint without falling back to slower NVMe.
The second mode is App Direct. In this instance, the Optane acts like a big storage drive that is as fast as a RAM disk. This disk, while not bootable, keeps its data between startups (an advantage of the memory being persistent), enabling very quick restarts to avoid serious downtime. App Direct mode is a little more esoteric than ‘just a big amount of DRAM’, as developers may have to re-architect their software stack in order to take advantage of the DRAM-like speeds this disk enables. It’s essentially a big RAM disk that holds its data. (ed: I’ll take two)
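In practice, App Direct is reached through a DAX-mounted filesystem and libraries such as Intel’s PMDK, and the programming model boils down to “map a file and treat it as memory.” A sketch of that model using an ordinary file as a stand-in for persistent media (the filename is made up, and a real deployment would map a file on a DAX mount):

```python
import mmap
import os

PATH = "appdirect_demo.bin"   # stands in for a file on a DAX mount
SIZE = 4096

def open_pmem(path, size):
    """Map a file as writable memory. On a real DAX filesystem the
    mapping goes straight to the Optane media, so a flush makes the
    stores durable without a page-cache round trip."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)
    m = mmap.mmap(fd, size)
    os.close(fd)  # the mapping stays valid after the fd is closed
    return m

# "First boot": write a record through the mapping and flush it
m = open_pmem(PATH, SIZE)
m[0:5] = b"hello"
m.flush()
m.close()

# "After restart": the record is still there
m2 = open_pmem(PATH, SIZE)
survived = bytes(m2[0:5])
m2.close()
```

This is the re-architecting the article mentions: software stops serializing to a block device and instead loads and stores directly into the persistent region, flushing at transaction boundaries.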
One of the issues when Optane was first announced was whether it would support enough read/write cycles to act as DRAM, given that the same technology was also being used for storage. To alleviate fears, Intel is going to guarantee every Optane module for three years, even if that module is run at peak writes for the entire warranty period. Not only does this mean Intel is putting its faith in its own product, it even convinced the very skeptical Charlie from SemiAccurate, who has been a long-time critic of the technology (mostly due to the lack of pre-launch information, though he seems satisfied for now).
Pricing for Intel’s Optane DCPMM is undisclosed at this point. The official line is that there is no specific MSRP for the different sized modules – it is likely to depend on which customers end up buying into the platform, how much, what level of support, and how Intel might interact with them to optimize the setup. We’re likely to see cloud providers offer instances backed with Optane DCPMM, and OEMs like Dell say they have systems planned for general availability in June. Dell stated that they expect users who can take advantage of the large memory mode to start using it first, with those who might be able to accelerate a workflow with App Direct mode taking some time to rewrite their software.
It should be noted that not all of Intel's second-generation Xeon Scalable CPUs support Optane. Only the Xeon Platinum 8200, Xeon Gold 6200, and Xeon Gold 5200 families, plus the Xeon Silver 4215, do; the Xeon Platinum 9200 family does not.
Intel has given us remote access into a couple of systems with Optane DCPMM installed. We’re still going through the process of finding the best way to benchmark the hardware, so stay tuned for that.
Intel Agilex: The New Breed of Intel FPGA
The acquisition of Altera in 2015 was big news for Intel. The idea was to bring FPGAs into Intel’s product family and eventually realize a number of synergies between the two, integrating the portfolio while taking advantage of Intel’s manufacturing facilities and corporate sales channels. Despite the deal closing in 2015, every product released since was developed prior to the acquisition and the integration of the two companies – until today. The new Agilex family of FPGAs is the first developed and produced wholly under the Intel name.
The announcement for Agilex is today; however, the first 10nm samples will be available in Q3. The role of the FPGA has been evolving of late, from offering general-purpose spatial compute hardware to offering hardened accelerators and enabling new technologies. With Agilex, Intel aims to offer that mix of acceleration and configurability, not only with the core array of gates, but also through additional chiplet extensions enabled by Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) technology. These chiplets can be custom third-party IP, PCIe 5.0, HBM, 112G transceivers, or even Intel’s new Compute Express Link (CXL) cache-coherent interconnect. Intel is promoting up to 40 TFLOPs of DSP performance, and is promoting its use in mixed-precision machine learning, with hardened support for bfloat16 and INT2 to INT8.
Intel will be launching Agilex in three product families: F, I, and M, in that order of both time and complexity. The Intel Quartus Prime software to program these devices will be updated for support during April, but the first F models will be available in Q3.
Columbiaville: Going for 100GbE with Intel 800-Series Controllers
Intel currently offers a lot of 10 gigabit Ethernet and 25 gigabit Ethernet infrastructure in the data center. The company launched 100G Omnipath a few years ago as an early alternative, and is looking towards a second generation of Omnipath to double that speed. In the meantime Intel has developed and is going to launch Columbiaville, its controller offering for the 100G Ethernet market, labeled as the Intel 800-Series.
Introducing faster networking to data center infrastructure is certainly a positive; however, Intel is keen to promote a few new technologies with the product. Application Device Queues (ADQ) help the hardware accelerate priority packets to ensure consistent performance, while Dynamic Device Personalization (DDP) enables additional programmability in the packet pipeline, allowing unique networking setups to add functionality and/or security.
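ADQ itself needs driver and NIC support, but the application side of traffic prioritization on Linux is exposed through ordinary socket options. A minimal sketch of tagging a flow as high priority – the priority value is an arbitrary example, and this tag is only one of the signals a queue-steering setup might use:

```python
import socket

def make_priority_socket(priority=6):
    """Create a TCP socket whose traffic carries a Linux
    SO_PRIORITY tag. Queueing disciplines (and, with ADQ-capable
    hardware, dedicated application queues) can use this tag to
    steer and prioritize the flow. Values above 6 need
    CAP_NET_ADMIN, so we stay within the unprivileged range."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, priority)
    return s

s = make_priority_socket()
prio = s.getsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY)
s.close()
```

The point of ADQ is that this kind of per-application classification happens in hardware queues on the NIC rather than purely in the kernel, which is where the consistency gains come from.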
The dual-port 100G card will be called the E810-CQDA2, and we are still waiting for information about the chip: die size, cost, process, etc. Intel states that its 100 GbE offerings will be available in Q3.
Xeon D-1600: A Generational Efficiency Improvement for Edge Acceleration
One of Intel’s key product areas is the edge, both in terms of compute and networking. One of the products Intel has focused on this area is Xeon D, which covers both high-efficiency compute with accelerated networking and cryptography (D-1500) and high-throughput compute with the same (D-2100) – the former Broadwell-based, the latter Skylake-based. Intel’s new Xeon D-1600 is a direct D-1500 successor: a true single-die solution that extracts an additional frequency and efficiency bump while remaining on the same manufacturing process as the D-1500, allowing Intel’s partners to easily drop in the new version without many functional changes.
Related Reading
- Intel Xeon Scalable Cascade Lake: Now with Optane!
- Intel Agilex: 10nm FPGAs with PCIe 5.0, DDR5, and CXL
- Intel Columbiaville: 800 Series Ethernet at 100G, with ADQ and DDP
- Intel Launches the Xeon D-1600 Family: Upgrades to Xeon D-1500
- Lenovo’s New Cascade Lake ThinkSystem Servers: Up to 8 Sockets with Optane
- Dell PowerEdge Updates: Upgrade to Cascade Lake and Optane
- Supermicro Calvinballs Into Cascade Lake: Over 100 New and Updated Offerings
38 Comments
abufrejoval - Tuesday, April 2, 2019 - link
I immediately checked to see how he admitted being wrong, but while his visible output is at an all-time low anyway, he tends to wait until he has proof. I'd like Intel to be able to deliver, and I'd like AMD and IBM to be able to deliver too, but all that NV-RAM has disappointed far too often and it's hard to overcome that experience with little more than new slideware.

xrror - Thursday, April 4, 2019 - link
I didn't mean my comment as a slam against Charlie. Quite the opposite. To convince Charlie, one of Optane's harshest critics - I'd really love to have a transcript of that Q & A just to read what Intel specifically said.
MFinn3333 - Tuesday, April 2, 2019 - link
The speed of Optane DIMMs is really kind of interesting. It ranges from 20% slower to 17x faster depending on the file system and workload.

https://arxiv.org/pdf/1903.05714.pdf
Samus - Tuesday, April 2, 2019 - link
400 WATTS!? LMFAO.

DannyH246 - Tuesday, April 2, 2019 - link
A 400W Intel power consumption rating implies a much higher peak power draw. A PSU with over 1200W output is needed for each 9282 chip (an 8 socket system would need over 10kW of power supply - BEFORE peripherals !!!)

Haha - who would buy this crap?
FunBunny2 - Wednesday, April 3, 2019 - link
"Haha - who would buy this crap?"

any Big Corp seeking to dump IBM mainframe. cheaper at twice the price. now, going from COBOL/file apps to RDBMS/*nix isn't a walk in the park. yes, the death of Big Iron has been just a couple of years away for a couple (or three) decades. may be now is the time.
duploxxx - Wednesday, April 3, 2019 - link
lollolol IAN for your comments regarding CPU side? What Intel marketing weed have you been smoking before posting this optimistic stuff?

A glued socket 56 cores consuming 400W. 12 dimms... no it is 6+6 just like the dual socket solution :) only 80 PCI-e lanes.... OEM customer specific no retail pricing....
You are aware that by mid 2019 there will be a 64core 128thread single socket 250W cpu to counter this shit? drop in compatible with already existing servers? At a way lower price than the top bin Platinum 8000 SKU.
Whoever thinks Intel has a massive CPU improvement this and next year is totally crazy, whoever continue to order intel cpu end of this year and next year for general server IT is also totally crazy. The only reason you might think of buying an Intel part is because of a specific high ghz low core count SKU for specific sw applications.
Icehawk - Wednesday, April 3, 2019 - link
In 23 years of being a sys admin... I have worked with exactly ZERO AMD systems. No company I have worked for has been willing to use non-Intel chips regardless of cost, efficiency, etc. Intel has a hard lock on corporate IT infrastructure somewhat like Apple still dominates the “creatives”. Not saying it’s right but it is reality.

abufrejoval - Wednesday, April 3, 2019 - link
I've heard that said about the 360 and 370 and then came the PDP-11 and the VAX.

I've heard that said about the VAX and then came the PC.
I've jumped over Suns, eroded z-Series and p-Series using Linux until they bled and I will make sure AMD will have its moments in our data centers while it makes sense.
During my 35 years in IT I have typically been right, but rarely convinced anyone in the first or even the second round. By the time upper management believes that the ideas I feed them are their own, they typically do what I want.
Once you manage to get your ego out of the way, you can enjoy that it's easy and pays the bills.
FunBunny2 - Thursday, April 4, 2019 - link
"eroded z-Series and p-Series using Linux"

those machines have been running linux for some years. one might wonder how many are running *only* linux? at the user-facing OS level, of course.