Intel Axes Data Center GPU Max 1350, Preps Max 1450 For ‘Different Markets’
Intel has effectively rebuilt its Data Center GPU Max series of compute GPUs. The chipmaker confirmed to Tom's Hardware that it has removed the Data Center GPU Max 1350 from the product stack. However, later this year Intel plans to introduce a freshly baked Data Center GPU Max 1450 with reduced I/O bandwidth to serve a "variety of markets." The move follows Intel's decision to cancel its Rialto Bridge GPU amid the restructuring of its AXG graphics division.
The initial Ponte Vecchio lineup consisted of the Data Center GPU Max 1550, Data Center GPU Max 1350, and Data Center GPU Max 1100. Intel already launched the 1550 model in the first quarter of this year, and the impending Q2 launch of the 1100 model is unaffected by the addition of the Data Center GPU Max 1450 to the product stack. Intel hasn't provided a specific launch date for the Data Center GPU Max 1450 yet, but we do know it will arrive this year.
"We launched the Intel Data Center GPU Max 1550 (600W) initially targeting only liquid cooling solutions. Later, we expanded our support for the Intel Data Center GPU Max 1550 (600W) to include air cooling solutions.
"As a result, we are streamlining our product offering by removing the Intel Data Center GPU Max 1350 (450W) designed for air-cooled solutions. We plan to introduce the Data Center GPU Max 1450 SKU in the second half of 2023; it supports reduced I/O bandwidth for different markets and enables both air and liquid cooling solutions. To complete our product portfolio, we are introducing the Data Center GPU Max 1100 SKU, a 300W PCIe (Gen5) card for broad market deployment," an Intel spokesperson told Tom's Hardware.
| | Data Center GPU Max 1100 | Data Center GPU Max 1350 | Data Center GPU Max 1450 | Data Center GPU Max 1550 |
|---|---|---|---|---|
| Tiles + Memory | ? | ? | ? | 39 + 8 |
| Xe HPC Cores / Compute Units | 56 | 112 | ? | 128 |
| 512-bit Vector Engines | 448 | 896 | ? | 1,024 |
| 4096-bit Matrix Engines | 448 | 896 | ? | 1,024 |
| Base Clock (MHz) | 1,000 | 750 | ? | 900 |
| Max Dynamic Clock (MHz) | 1,550 | 1,550 | ? | 1,600 |
| L1 Cache | 12MB | 48MB | ? | 64MB at 105 TB/s |
| L2 Rambo Cache | 108MB | 216MB | ? | 408MB at 13 TB/s |
| Memory | 48GB | 96GB | ? | 128GB at 3.2 TB/s |
| Memory Interface | 1,024-bit | ? | ? | 1,024-bit |
| Memory Bandwidth (GB/s) | 1,228.8 | ? | ? | 3,276.8 |
Intel has yet to share specifications for the Data Center GPU Max 1450. Logic suggests it's a cut-down version of the Data Center GPU Max 1550, though that remains unconfirmed. The Data Center GPU Max 1450 will support both air and liquid cooling solutions and is expected to carry the same 450W TDP rating as the canceled 1350 model.
One of the key details in Intel's statement is that the chipmaker has tailored its Data Center GPU Max 1450 for "various markets." This appears to have something to do with China. According to Intel, the Data Center GPU Max 1450 will ship with reduced I/O bandwidth, presumably to comply with US regulations on GPU exports to China.
US export regulations currently dictate that chip-to-chip I/O bandwidth for GPUs destined for China must be less than 600 GB/s. For example, Nvidia modified its H100 (Hopper) GPU and rebranded it as the H800 to tailor it for the now-restricted Chinese market. It's reasonable to expect Intel to take a similar approach with the Data Center GPU Max 1450.
Besides replacing the Data Center GPU Max 1350, Intel has also expanded the cooling options for its flagship Data Center GPU Max 1550. The 600W GPU was initially available only in liquid-cooled solutions, but Intel now offers an air-cooled version of the Data Center GPU Max 1550 as well.
Intel's original plan was to release Rialto Bridge as the successor to Ponte Vecchio this year and launch Falcon Shores in 2024 to replace Rialto Bridge. However, the chipmaker ultimately decided to abandon Rialto Bridge and push Falcon Shores back to 2025. As a result, Intel will have to soldier on with Ponte Vecchio for at least two more years, competing against Nvidia's H100 and AMD's Instinct MI300.