Firm Estimates Intel’s GPU Unit Losses at $3.5 Billion, Suggests a Sell-Off
The head of Jon Peddie Research (JPR), a graphics market analysis firm with nearly 40 years of history, has suggested that Intel may retire its Accelerated Computing Systems and Graphics Group (AXG). The division has lost money for years and has not managed to offer a competitive product in any of the market segments it serves. Forget the best graphics card; Intel first needs to ship fully functional GPUs.
A $3.5 Billion Loss
Jon Peddie estimates that Intel has invested about $3.5 billion in developing discrete GPUs, and those investments have yet to pay off. In fact, Intel’s AXG has officially lost $2.1 billion since it was formally established in Q1 2021. Given the track record of Intel CEO Pat Gelsinger, who has shut down six businesses since early 2021, JPR suggests that AXG could be next.
“Gelsinger isn’t afraid to make tough decisions, and if projects don’t pay off, he will drop them, even if they are his personal favorites,” Peddie wrote in a blog post. […] “Rumors have hinted that the party is over and AXG will be the next group to be dumped.”
When Intel announced its plans to develop discrete graphics solutions in 2017, it said GPUs would power compute, graphics, media, imaging, and machine-intelligence functions for client and data center applications. As an added bonus, the Core and Visual Computing Group was also intended to serve the emerging edge computing market.
After five years of work on discrete GPUs, the company has released two low-end standalone GPUs aimed at inexpensive PCs and some data center applications; announced a low-power graphics architecture for integrated GPUs; delivered a single API (oneAPI) that can be used to program CPUs, GPUs, FPGAs, and other compute units; canceled its Xe-HP GPU architecture for data center GPUs; repeatedly delayed shipments of Ponte Vecchio compute GPUs for AI and HPC applications (most recently due to the late arrival of the Intel 4 node); and delayed the launch of its Xe-HPG ACM-G11 gaming GPUs by about a year.
Considering the already delayed market launch of Intel’s Arc Alchemist 500- and 700-series GPUs, and the fact that they will have to compete with AMD’s next-generation Radeon RX 7000 and Nvidia’s GeForce RTX 40-series products, they are very likely to fail in the market, which would only increase Intel’s losses.
To Ax or Not to Ax
Given AXG’s track record, Intel has spent $3.5 billion on the division without any tangible success so far, Jon Peddie argues. The loss itself isn’t surprising, as discrete GPUs are an entirely new market for Intel that requires significant investment. Meanwhile, Intel’s own Habana Gaudi2 deep learning processor shows a fairly clear performance advantage over Nvidia’s A100 in AI workloads, one of the markets Ponte Vecchio is meant to address. That success could tip the scales toward winding AXG down.
“Whether or not Intel winds things down and exits is a 50-50 guess,” Peddie said. “Otherwise, the company will face years of losses as it tries to break into an unfriendly and unforgiving market.”
Strategic Importance of GPUs
However, while it might make sense for Intel to ax the AXG group and stop developing discrete GPUs to cut its losses, it should be noted that Intel has made some progress in the division, especially in discrete GPU development, and is pursuing several strategically important directions there. The list of development directions includes:
- AI/DL/ML applications
- HPC applications
- Competitive GPU architecture and IP for client discrete and integrated GPUs, and custom solutions from Intel Foundry Services (IFS)
- Data center GPU for rendering and video encoding
- Edge computing applications with discrete or integrated GPUs
- Hybrid processing unit for AI/ML and HPC applications
Discrete GPU development itself has so far only cost Intel money (one wonders how much profit the Xe-LP iGPU architecture has generated after two years on the market). But without a competitive GPU architecture capable of scaling from low-end laptops to supercomputers, Intel cannot address many new growth opportunities.
Habana Gaudi2 looks like a competitive deep learning solution, but it cannot be used for supercomputing applications. Additionally, without further evolution of Intel’s Xe-HPC data center GPU architecture, the company will be unable to build hybrid processing units for AI/ML and HPC applications (such as Falcon Shores). Without such an XPU, Intel’s ZettaFLOPS plans for 2027 start to look increasingly unrealistic.
Intel’s efforts with discrete GPUs have not lived up to expectations, but the company needs an explicitly parallel computing architecture for future application workloads. GPUs are the proven architecture of choice for highly parallel workloads, whether those require low computational precision, like AI/DL/ML applications, or full FP64 precision, like supercomputing applications.
If Intel were to stop developing standalone GPUs, it would have to completely redesign its roadmap, both in products and in architecture. For example, Intel’s small in-house iGPU development team is unlikely to deliver an integrated graphics solution that competes with AMD’s, so finding a provider of competitive GPU architectures for its client processors would be a must, much as Apple once sourced GPU IP for its client system-on-chip (SoC) designs.
Overview
Intel’s discrete GPU efforts, which may have already cost the company around $3.5 billion, have yielded little so far and could generate further losses, so eliminating the AXG division looks like an increasingly attractive business decision. However, GPUs and the hybrid architectures derived from them are strategically important to many of the markets Intel serves and to the applications it must address going forward, so axing the AXG group seems counterproductive. Many of the division’s problems may come down to Intel’s graphics drivers, but fixing drivers is not an easy task either.
What will Pat Gelsinger do? Perhaps we will find out sooner or later. “Maybe the clouds will clear by the end of the quarter,” Jon Peddie muses.