Nvidia’s A800 Compute GPU Price Jumps 20% on Export Fears

Prices for Nvidia’s A800 chips in the Chinese market have skyrocketed following rumors that the US government may ban the sale of high-end compute GPUs to China. According to DigiTimes, prices have soared by as much as 20% in just two weeks.
List prices for Nvidia’s A800 compute GPUs in the PCIe card form factor were around RMB 90,000 (US$12,400) per unit just two weeks ago. Prices are now approaching RMB 110,000 (US$15,000) per unit, an increase of roughly 20%.
At the 2023 World Artificial Intelligence Conference, soaring chip prices and the looming shortage of computing power in China attracted considerable attention. It is becoming increasingly difficult to obtain high-end AI chips through legitimate channels in China. As a result, Chinese technology companies that need computing power have turned to cloud services offered by companies such as Amazon AWS and Microsoft Azure, typically using them for large language model training in data centers located in Singapore or China.
Chen Pei, vice president of Vibranium Consulting, said leasing cloud GPU computing power is significantly more expensive than building an in-house GPU cluster, with large cloud providers charging roughly $2 to $3 per GPU per hour. Echoing this, Sun Jin of Chinese AI and computer vision firm CloudWalk Technology said many Chinese companies have no choice but to bear the high cost of leasing computing power in the cloud, even though it runs 50% to 100% higher than building their own data centers.
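To put those figures in perspective, here is a rough back-of-the-envelope comparison in Python. Only the $2–$3 per-GPU-hour lease rate and the roughly US$15,000 A800 street price come from the article; the service life, utilization, and ownership-overhead factor are illustrative assumptions, not reported numbers.

```python
# Rough sketch: leasing vs. owning a single compute GPU.
# Assumed inputs are marked; only the lease rate and GPU price are from the article.

LEASE_RATE_PER_GPU_HOUR = 2.5   # midpoint of the $2-$3/GPU-hour range cited above
GPU_PURCHASE_PRICE = 15_000     # ~RMB 110,000 A800 street price, in USD
OWNERSHIP_OVERHEAD = 1.0        # assumed: power, cooling, hosting, and staff roughly
                                # double the hardware cost over its service life
SERVICE_LIFE_YEARS = 3          # assumed depreciation period
UTILIZATION = 0.7               # assumed fraction of hours the GPU is kept busy

busy_hours = SERVICE_LIFE_YEARS * 365 * 24 * UTILIZATION
lease_cost = LEASE_RATE_PER_GPU_HOUR * busy_hours
own_cost = GPU_PURCHASE_PRICE * (1 + OWNERSHIP_OVERHEAD)

print(f"Busy GPU-hours over {SERVICE_LIFE_YEARS} years: {busy_hours:,.0f}")
print(f"Leasing cost:   ${lease_cost:,.0f}")
print(f"Ownership cost: ${own_cost:,.0f}")
print(f"Leasing premium: {lease_cost / own_cost - 1:.0%}")
```

Under these assumptions, leasing works out roughly 50% more expensive over three years, consistent with the 50–100% premium CloudWalk cites; the gap widens with higher utilization or a longer service life.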
However, it is uncertain whether Chinese vendors will be able to keep using US-based cloud computing services such as AWS and Azure, as that will depend on regulatory action by the US government: access may eventually require an export license or be banned outright.
China’s computing industry faces several significant challenges: building AI computing clusters takes time, high-end AI chips are difficult to source from abroad, and domestic AI chips such as Biren’s BR104 and BR100 remain in short supply. Given the potential restrictions on access to cloud computing capacity in Europe and the United States, expectations are rising that Chinese semiconductor makers will focus on improving their chip manufacturing processes and advancing their software stacks.
| | Biren BR104 | Nvidia A800 | Nvidia A100 | Nvidia H100 |
| --- | --- | --- | --- | --- |
| Form factor | FHFL card | FHFL card (?) | SXM4 | SXM5 |
| Number of transistors | ? | 54.2 billion | 54.2 billion | 80 billion |
| Node | N7 | N7 | N7 | 4N |
| Power | 300W | ? | 400W | 700W |
| FP32 TFLOPS | 128 | 13.7 (?) | 19.5 | 60 |
| TF32+ TFLOPS | 256 | ? | ? | ? |
| TF32 TFLOPS | ? | 109/218* (?) | 156/312* | 500/1000* |
| FP16 TFLOPS | ? | 56 (?) | 78 | 120 |
| FP16 TFLOPS tensor | ? | 218/437* | 312/624* | 1000/2000* |
| BF16 TFLOPS | 512 | 27 | 39 | 120 |
| BF16 TFLOPS tensor | ? | 218/437* | 312/624* | 1000/2000* |
| INT8 TOPS | 1024 | ? | ? | ? |
| INT8 TOPS tensor | ? | 437/874* | 624/1248* | 2000/4000* |

*The second figure in each pair is the throughput with sparsity.