The Rise of China GPU Makers: AI and Tech Sovereignty Drive New GPU Entrants

The number of Chinese GPU startups is staggering as China seeks to build AI capabilities and semiconductor sovereignty, according to a new report from Jon Peddie Research. The number of GPU makers worldwide has also grown in recent years as demand for artificial intelligence (AI), high-performance computing (HPC), and graphics processing has expanded at an unprecedented rate. AMD and Nvidia maintain their lead in discrete graphics for PCs, but Intel is catching up.
18 GPU developers
Dozens of companies developed graphics cards and discrete graphics processors in the 1980s and 1990s, but fierce competition for the best performance in 3D games drove most of them out of business. By 2010, only AMD and Nvidia could offer competitive standalone GPUs for gaming and computing, while others focused on either integrated GPUs or GPU IP.
The mid-2010s saw a rapid increase in the number of China-based PC GPU developers, fueled by the country's push for technological self-sufficiency and the emergence of AI and HPC as high-tech megatrends.
In total, 18 companies develop and produce GPUs, according to Jon Peddie Research. Two of them develop SoC-bound GPUs primarily with smartphones and notebooks in mind, six are GPU IP providers, and 11, including AMD, Intel, and Nvidia, design GPUs for PCs and datacenters, the kind of products that make the list of the best graphics cards.
In fact, if other China-based companies such as Biren Technology and Tianshu Zhixin are added to the list, there are even more GPU designers. However, because Biren and Tianshu Zhixin currently focus solely on AI and HPC, JPR does not count them as GPU developers.
PC | DC | IP | SoCs
AMD | Biren | Arm | Apple
Bolt | Tianshu Zhixin | DMP | Qualcomm
Innosilicon | | Imagination Technologies |
Intel | | Think Silicon |
Jingjia Micro | | VeriSilicon |
MetaX | | Xi Silicon |
Moore Threads | | |
Nvidia | | |
Sietium | | |
Xiangdixian | | |
Zhaoxin | | |
China wants GPUs
As the world's second-largest economy, China inevitably competes with the United States and other developed nations in almost every area, including technology. The country has done a great deal to attract engineers from all over the world and to make it worthwhile to establish chip design startups domestically. In fact, hundreds of new IC design companies are founded in China every year. They develop everything from tiny sensors to complex communication chips, helping the country reduce its dependence on Western suppliers.
But to really join the AI and HPC bandwagon, China needs CPUs, GPUs, and dedicated accelerators. When it comes to computing, it is impossible for a Chinese company to quickly overtake the longtime CPU and GPU market leaders. Still, it is arguably easier, and perhaps more fruitful, to develop and produce a decent GPU than to try to build a competitive CPU.
"AI training was a big motivator [for Chinese GPU companies], along with avoiding Nvidia's high prices and (probably most of all) China's desire for self-sufficiency," said Jon Peddie, head of JPR.
GPUs are inherently parallel, which makes it easier to get a working GPU out the door: a large number of identical compute units provides internal redundancy (assuming a relatively low cost per transistor and decent overall yield). That same parallelism also makes GPUs easier to scale out. Considering that China-based SMIC does not operate nodes as advanced as TSMC's, this method of scaling performance looks good enough. In fact, even if GPU developers in China lost access to TSMC's advanced nodes (N7 and below), at least some of them could build simpler GPU designs at SMIC that address the AI/HPC and/or gaming/entertainment markets.
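The redundancy argument can be made concrete with a back-of-the-envelope yield model. The sketch below uses a simple Poisson defect model; all the numbers (unit count, unit area, defect density) are illustrative assumptions, not data from any real process or product. It shows why a chip built from many identical units that can tolerate a few dead ones yields far better than one that needs every transistor working.

```python
import math
from math import comb, exp

def unit_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Poisson model: probability a block of the given area has zero defects."""
    return exp(-area_mm2 * d0_per_mm2)

def die_yield_with_redundancy(n_units: int, min_good: int,
                              unit_area: float, d0: float) -> float:
    """Probability that at least `min_good` of `n_units` identical compute
    units are defect-free (units assumed to fail independently)."""
    p = unit_yield(unit_area, d0)
    return sum(comb(n_units, k) * p**k * (1 - p)**(n_units - k)
               for k in range(min_good, n_units + 1))

# Illustrative assumptions: 64 compute units of 4 mm^2 each,
# defect density 0.001 defects/mm^2 (i.e. ~0.1 per cm^2).
no_redundancy = die_yield_with_redundancy(64, 64, 4.0, 0.001)    # every unit must work
with_redundancy = die_yield_with_redundancy(64, 60, 4.0, 0.001)  # 4 spare units allowed

print(f"all 64 units good:  {no_redundancy:.3f}")
print(f">=60 units good:    {with_redundancy:.3f}")
```

With these made-up numbers, allowing just four spare units pushes the sellable-die fraction close to 100%, which is exactly the yield-harvesting trick the paragraph above describes (and why GPU vendors routinely ship parts with some units fused off).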
From China's perspective as a country, AI- and HPC-capable GPUs could arguably be more important than CPUs, because AI and HPC enable entirely new applications such as self-driving cars, smart cities, and even advanced conventional weapons. The U.S. government restricts the export of supercomputer-bound CPUs and GPUs to China precisely to slow or limit the development of advanced weapons; sophisticated AI-capable GPUs can, for example, power autonomous killer drones and coordinate drone swarms, which represents a formidable force.
GPU microarchitectures are relatively simple; hardware design is expensive
On the other hand, keep in mind that while there are plenty of GPU developers out there, only two can actually build competitive discrete GPUs for PCs. That is likely because GPU architectures are relatively easy to develop but very hard to implement properly, and good drivers are harder still.
CPU and GPU microarchitectures sit at the intersection of science and art. They are sophisticated sets of algorithms that a fairly small group of engineers can develop, but doing so can take years, Peddie says.
"[Microarchitectures] get fleshed out on napkins and whiteboards," Peddie said. "[As for costs,] if it is just the architect [and the core team], it can be one to three or four people. [But] buildings, rocket ships, networks, processors: architecture of any type is a complicated chess game. Trying to predict where manufacturing processes and standards will be five years out, where the cost-performance trade-offs will be, what features to add and what to remove or ignore, is very tricky, time-consuming, labor-intensive work. [...] Architects spend a lot of time in their heads running what-if scenarios: what if you make the cache 25% bigger, what if you use 6,000 FPUs, what if you need to do PCIe 5.0 I/O, will you make it in time?"
In a world where time to market is everything, and microarchitectures take years and talented designers to develop, many companies opt for off-the-shelf microarchitectures, licensing silicon-proven GPU IP from Arm, Imagination Technologies, and others. For example, Innosilicon, a contract developer of chips and physical IP, licensed GPU microarchitecture IP from Imagination for its Fantasy GPUs, and at least one other China-based GPU developer uses Imagination's PowerVR architecture. Zhaoxin, meanwhile, uses an iteration of the GPU microarchitecture it obtained from Via Technologies, inherited from S3 Graphics.
Microarchitecture development costs vary, but they are relatively cheap compared with the cost of the physical implementation of a modern high-end GPU.
For years, Apple and Intel, despite their wealth of engineering talent, relied on Imagination for their GPU designs (Apple still does to some extent). MediaTek and other smaller SoC suppliers rely on Arm. Qualcomm's graphics lineage goes back to ATI/AMD, and Samsung partnered with AMD after trying for several years to design its own graphics engine.
Two of the new Chinese companies hired former AMD and Nvidia architects to start their GPU efforts, and another two use Imagination IP. Time to market matters, and learning the architect's craft, what to worry about and how to find fixes, is a very time-consuming process.
"If you can go to a company that already has a design and has been designing for a long time, you save a lot of time and money. Time to market is everything," says the head of Jon Peddie Research. "There are so many pitfalls. Not all GPUs designed by AMD or Nvidia are winners. [But] a good design lasts for generations with minor tweaks."
New production nodes make both hardware implementation and software development prohibitively expensive. International Business Strategies estimates that designing a fairly complex device on a 5nm-class technology costs more than $540 million, and those costs roughly triple at 3nm.
"Including layout and floor-planning, simulation, verification, and drivers, [a GPU developer's] costs and time skyrocket," Peddie explains.
While only a handful of companies in the world can develop chips as complex as modern gaming and compute GPUs from AMD and Nvidia (46 to 80 billion transistors), China-based Biren has managed something comparable with its BR104 and BR100 devices (the BR104 is estimated to contain about 38.5 billion transistors).
Thoughts
The fact that, despite the exorbitant costs, 8 of the 11 PC/datacenter GPU designers are based in China speaks for itself. We probably will not see competitive discrete gaming GPUs from anyone outside the American giants in the near future. One reason is that developing GPUs is difficult and time-consuming; another is that these highly complex designs demand prohibitively expensive hardware implementations. It remains to be seen whether China can field competitive entrants, but if it fails, it will not be for lack of effort.