Intel Shows off Ponte Vecchio 2-Stack GPU & Sapphire Rapids HBM CPU Performance Against NVIDIA’s A100
In the presentation by Intel Fellow & Chief GPU Compute Architect, Hong Jiang, we get some more details regarding the upcoming server powerhouses from the blue team. The Ponte Vecchio GPU comes in three configurations, starting with a single OAM and ranging up to an x4 subsystem with Xe Links, running either solo or alongside a dual-socket Sapphire Rapids platform. The OAM supports all-to-all topologies for both 4-GPU and 8-GPU platforms. Complementing the entire platform is Intel's oneAPI software stack, whose Level Zero API provides a low-level hardware interface to support cross-architecture programming. Some of the main features of Level Zero include the following (a minimal usage sketch follows the list):
- Interface for oneAPI and other tools to accelerator devices
- Fine-grain control of and low-latency access to accelerator capabilities
- Multi-threaded design
- For GPUs, ships as part of the driver
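To give a sense of what that "low-level hardware interface" looks like in practice, here is a minimal C++ sketch that enumerates accelerators through the public Level Zero API. The snippet is ours rather than Intel's, and error handling is omitted for brevity:

```cpp
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

int main() {
    // Initialize Level Zero for GPU devices only.
    zeInit(ZE_INIT_FLAG_GPU_ONLY);

    // First call gets the driver count, second fills the handles.
    uint32_t driverCount = 0;
    zeDriverGet(&driverCount, nullptr);
    std::vector<ze_driver_handle_t> drivers(driverCount);
    zeDriverGet(&driverCount, drivers.data());

    for (auto drv : drivers) {
        // Same two-call pattern for the devices under each driver.
        uint32_t deviceCount = 0;
        zeDeviceGet(drv, &deviceCount, nullptr);
        std::vector<ze_device_handle_t> devices(deviceCount);
        zeDeviceGet(drv, &deviceCount, devices.data());

        for (auto dev : devices) {
            ze_device_properties_t props{};
            props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
            zeDeviceGetProperties(dev, &props);
            printf("Found accelerator: %s\n", props.name);
        }
    }
    return 0;
}
```

Since Level Zero ships as part of the GPU driver, a program like this needs no extra runtime beyond the loader library it links against.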
Coming to the performance metrics, a 2-Stack Ponte Vecchio GPU configuration, like the one featured on a single OAM, is capable of delivering up to 52 TFLOPs of FP64/FP32 compute, 419 TFLOPs of TF32 (XMX Float 32), 839 TFLOPs of BF16/FP16, and 1678 TOPs of INT8 horsepower. Intel also details the maximum cache sizes and the peak bandwidth offered by each of them. The Register File on the Ponte Vecchio GPU is 64 MB and offers 419 TB/s of bandwidth, the L1 cache also comes in at 64 MB and offers 105 TB/s (4:1), the L2 cache comes in at 408 MB and offers 13 TB/s of bandwidth (8:1), while the HBM memory pools up to 128 GB and offers 4.2 TB/s of bandwidth (4:1). There is a range of compute-efficiency techniques within Ponte Vecchio, such as the following (a quick ratio check follows the list):
Register File:
- Register Caching
- Accumulators

L1/L2 Cache:
- Write Through
- Write Back
- Write Streaming
- Uncached

Prefetch:
- Software (instruction) prefetch to L1 and/or L2
- Command Streamer prefetch to L2 for instruction and data
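The quoted bandwidths line up with the ratios on Intel's slide; here is a quick back-of-the-envelope check in C++ using only the numbers listed above (the struct and names are ours, purely for illustration):

```cpp
#include <cstdio>

// Bandwidth hierarchy Intel quotes for a 2-Stack Ponte Vecchio.
int main() {
    struct Level { const char* name; double size_mb; double bw_tbs; };
    const Level levels[] = {
        {"Register File", 64.0,          419.0},
        {"L1 Cache",      64.0,          105.0},
        {"L2 Cache",      408.0,         13.0},
        {"HBM",           128.0 * 1024,  4.2},  // 128 GB expressed in MB
    };

    // Each step down the hierarchy trades bandwidth for capacity;
    // print the ratio between adjacent levels (~4x, ~8x, ~3x).
    for (int i = 1; i < 4; ++i) {
        printf("%s -> %s: %.1fx less bandwidth\n",
               levels[i - 1].name, levels[i].name,
               levels[i - 1].bw_tbs / levels[i].bw_tbs);
    }
    return 0;
}
```

Each step down the hierarchy costs roughly 3-8x in bandwidth, which is exactly why the write-policy and prefetch controls listed above matter for keeping the compute units fed.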
Intel explains that the larger L2 cache can deliver some huge gains in workloads such as a 2D FFT case and a DNN case. Performance comparisons between a full Ponte Vecchio GPU and modules down-configured to 80 MB and 32 MB of L2 have been shown. But that's not all: Intel also has performance comparisons between the NVIDIA Ampere A100 running CUDA and SYCL and its own Ponte Vecchio GPUs running SYCL. In miniBUDE, a computational workload that predicts the binding energy of a ligand with its target, the Ponte Vecchio GPU completes the simulation 2 times faster than the Ampere A100. There's another performance metric in ExaSMR (a small modular reactor simulation workload for nuclear reactor designs); here, the Intel GPU is shown to offer a 1.5x performance lead over the NVIDIA GPU. It is a bit interesting that Intel is still comparing its Ponte Vecchio GPUs to the Ampere A100, given that the green team has since launched its next-gen Hopper H100 and it's already shipping to customers. If Chipzilla is as confident in these 1.5-2x performance leads as the slides suggest, it shouldn't have much trouble competing with Hopper either.
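The point of benchmarking both GPUs under SYCL is that a single source file can target either vendor. As a rough illustration (a toy kernel of our own, not miniBUDE or ExaSMR), something like the following compiles for an Intel GPU through Level Zero or an NVIDIA GPU through a CUDA backend, depending on how the SYCL toolchain is configured:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    // Pick whichever GPU the runtime finds; the backend decides
    // which vendor's hardware actually executes the kernel.
    sycl::queue q{sycl::gpu_selector_v};
    printf("Running on: %s\n",
           q.get_device().get_info<sycl::info::device::name>().c_str());

    const size_t n = 1 << 20;
    float* data = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;

    // Scale every element on the device.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] *= 2.0f;
    }).wait();

    printf("data[0] = %f\n", data[0]);
    sycl::free(data, q);
    return 0;
}
```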
Here’s Everything We Know About The Intel 7 Powered Ponte Vecchio GPUs
Moving over to the Ponte Vecchio specs, Intel outlined some key features of its flagship data center GPU, such as 128 Xe cores, 128 RT units, HBM2e memory, and a total of 8 Xe-HPC GPUs that will be connected together. The chip will feature up to 408 MB of L2 cache in two separate stacks that will connect via the EMIB interconnect. The chip will feature multiple dies based on Intel's own 'Intel 7' process and TSMC's N7/N5 process nodes. Intel also previously detailed the package and die sizes of its flagship Ponte Vecchio GPU based on the Xe-HPC architecture. The chip will consist of 2 stacks with 16 active compute dies between them. The maximum active top die, the 'Compute Tile', is going to measure 41mm², while the base die sits at 650mm². We have all the chiplets and process nodes that the Ponte Vecchio GPUs will utilize, listed below:
- Intel 7nm
- TSMC 7nm
- Foveros 3D Packaging
- EMIB
- 10nm Enhanced SuperFin
- Rambo Cache
- HBM2
Following is how Intel gets to 47 tiles on the Ponte Vecchio chip:
- 16 Xe HPC (internal/external)
- 8 Rambo (internal)
- 2 Xe Base (internal)
- 11 EMIB (internal)
- 2 Xe Link (external)
- 8 HBM (external)
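Adding up those six line items does indeed land on 47, as a trivial check confirms:

```cpp
#include <cstdio>

// Tile breakdown from Intel's slide; the total should be 47.
int main() {
    const int xe_hpc = 16, rambo = 8, xe_base = 2,
              emib = 11, xe_link = 2, hbm = 8;
    printf("Total tiles: %d\n",
           xe_hpc + rambo + xe_base + emib + xe_link + hbm);  // prints 47
    return 0;
}
```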
The Ponte Vecchio GPU makes use of 8 HBM 8-Hi stacks and contains a total of 11 EMIB interconnects. The whole Intel Ponte Vecchio package measures 4843.75mm². It is also mentioned that the bump pitch for Meteor Lake CPUs using high-density 3D Foveros packaging will be 36µm. The Ponte Vecchio GPU is not one chip but a combination of several chips. It's a chiplet powerhouse, packing more chiplets than any other GPU or CPU out there, 47 to be precise. And these are not based on just one process node but several process nodes, as we detailed just a few days back. Although the Aurora supercomputer in which the Ponte Vecchio GPUs and Sapphire Rapids CPUs were to be used has been pushed back due to several delays by the blue team, it is still good to see the company offering more details. Intel has since teased its next-generation Rialto Bridge GPU as the successor to Ponte Vecchio, which is said to begin sampling in 2023. You can read more details on that here.