The Intel Arc A380 has to be one of the worst graphics card launches ever. To be clear, it's the retail launch that was botched, not necessarily the hardware itself. From all indications, Intel knew the drivers were broken when the hardware was ready to ship earlier this year. With new GPUs from AMD and Nvidia on the horizon, rather than taking the time to fix the drivers before a worldwide retail launch, Intel decided to ship its Arc GPUs in China first. That history matters when deciding whether a product deserves a spot on our list of the best graphics cards.
A few months later, after plenty of negative publicity, numerous driver updates, and cards gradually making their way to other shores, the Arc A380 is now officially available in the US, starting at $139. The first Newegg listing has already sold out and is currently on backorder, though that's more likely due to limited supply than to high demand. Still, the A380 isn't all bad, and we're happy to see Team Blue return to the dedicated GPU market after a 24-plus-year hiatus. (No, we're not counting last year's Intel DG1, which only worked with specific motherboards.)
How does the Arc A380 compare to competing AMD and Nvidia GPUs, and is the AV1 hardware-encoding acceleration worth the hype? We'll see where it lands in our GPU benchmark hierarchy. If you want a spoiler: not good. But let's get into the details.
Arc Alchemist Architecture Summary
We provided extensive coverage of Intel's Arc Alchemist architecture, dating back about a year. When that article was first written, we expected a launch in late 2021 or early 2022. That slipped to a planned March 2022 release, which in turn became a mid-2022 release, and even now it isn't quite a full launch. The Arc A380 sits at the bottom of the price and performance ladder and is only the first salvo. We've seen plenty of teasers for the faster Arc A750, which appears to land close to RTX 3060 performance based on Intel's own benchmarks and is expected to arrive in the next month or so. As for products like the faster Arc A770 or the mid-tier Arc A580, only time will tell.
Arc Alchemist represents a departure from Intel's previous graphics designs. Intel has renamed some of the core building blocks, though there's likely plenty of overlap in certain elements. The Execution Unit (EU) is gone, replaced by the Vector Engine (VE). Each VE can execute eight FP32 operations per cycle, which loosely translates into "GPU cores" or GPU shaders, roughly equivalent to AMD's and Nvidia's shaders.
Intel groups 16 VEs into a single Xe-Core, which includes other functionality as well. Each Xe-Core thus contains 128 shader cores, making it roughly analogous to an AMD Compute Unit (CU) or an Nvidia Streaming Multiprocessor (SM). All of these are essentially SIMD (single instruction, multiple data) designs, and like its competitors, Arc Alchemist's shaders support the full DirectX 12 Ultimate feature set.
That naturally means incorporating ray tracing hardware into the design, and Intel includes one Ray Tracing Unit (RTU) per Xe-Core. The exact details of the ray tracing hardware aren't entirely clear yet, but based on our testing, each Intel RTU may well match an Nvidia Ampere RT core.
Intel didn't stop there. Alongside the VEs, RTUs, and the other usual graphics hardware, Intel also added a matrix engine: the XMX engine (Xe Matrix eXtensions). These are similar in principle to Nvidia's Tensor cores and are designed to churn through large amounts of low-precision data for machine learning and other uses. Each XMX engine is 1024 bits wide and can process 64 FP16 or 128 INT8 operations per cycle, giving Arc GPUs a relatively large amount of compute for this class.
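As a sanity check on those figures, here's our own back-of-the-envelope math (not Intel's): a 1024-bit engine neatly fits 64 FP16 or 128 INT8 values, and counting each multiply-accumulate as two operations at the A380's 2.45 GHz boost clock reproduces the headline FP16 number. This assumes one XMX engine per VE, i.e. 128 XMX engines on the A380's eight Xe-Cores.

```python
# Back-of-the-envelope XMX throughput for the Arc A380 (our math, not Intel's).
XMX_WIDTH_BITS = 1024
FP16_BITS, INT8_BITS = 16, 8

fp16_lanes = XMX_WIDTH_BITS // FP16_BITS   # 64 FP16 values per XMX engine
int8_lanes = XMX_WIDTH_BITS // INT8_BITS   # 128 INT8 values per XMX engine

# Assumption: the A380 has 8 Xe-Cores x 16 VEs, one XMX engine per VE.
xmx_engines = 8 * 16                       # 128 XMX engines
boost_clock_ghz = 2.45

# A fused multiply-accumulate counts as two operations in marketing TFLOPS.
fp16_tflops = xmx_engines * fp16_lanes * 2 * boost_clock_ghz / 1000
print(f"FP16 via XMX: {fp16_tflops:.1f} TFLOPS")  # ~40 TFLOPS, matching the spec table
```

That 8x ratio over the regular FP32 shader rate is why the A380's FP16 figure looks so outsized next to AMD and Nvidia in this price class.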
Intel Arc A380 Specifications
With that quick architectural overview out of the way, here are the Arc A380's specifications compared to competing AMD and Nvidia GPUs. We list theoretical performance, but keep in mind that not all teraflops and teraops are created equal; real-world testing is required to see how each architecture actually performs.
| Graphics Card | Arc A380 | RX 6500 XT | RX 6400 | GTX 1650 Super | GTX 1650 |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Architecture | ACM-G11 | Navi 24 | Navi 24 | TU116 | TU117 |
| Process technology | TSMC N6 | TSMC N6 | TSMC N6 | TSMC 12FFN | TSMC 12FFN |
| Die size (mm^2) | 157 | 107 | 107 | 284 | 200 |
| GPU cores (shaders) | 1024 | 1024 | 768 | 1280 | 896 |
| Ray tracing "cores" | 8 | 16 | 12 | — | — |
| Base clock (MHz) | 2000 | 2310 | 1923 | 1530 | 1485 |
| Boost clock (MHz) | 2450 | 2815 | 2321 | 1725 | 1665 |
| VRAM speed (Gbps) | 15.5 | 18 | 16 | 12 | 8 |
| VRAM bus width (bits) | 96 | 64 | 64 | 128 | 128 |
| TFLOPS FP32 (boost) | 5.0 | 5.8 | 3.6 | 4.4 | 3.0 |
| TFLOPS FP16 (XMX/Tensor where available) | 40 | 11.6 | 7.2 | 8.8 | 6 |
| Video encoding | H.264, H.265, AV1, VP9 | — | — | H.264, H.265 (Turing) | H.264, H.265 (Volta) |
| Release date | June 2022 | January 2022 | January 2022 | November 2019 | April 2019 |
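The theoretical FP32 figures in the table follow directly from shader count and boost clock: each shader executes one fused multiply-add (two floating-point operations) per cycle. A quick check using the table's own numbers:

```python
# Verify the table's theoretical FP32 numbers: shaders x 2 ops (FMA) x boost clock.
cards = {
    "Arc A380":       (1024, 2450),
    "RX 6500 XT":     (1024, 2815),
    "RX 6400":        (768,  2321),
    "GTX 1650 Super": (1280, 1725),
    "GTX 1650":       (896,  1665),
}

for name, (shaders, boost_mhz) in cards.items():
    tflops = shaders * 2 * boost_mhz / 1e6  # MHz -> TFLOPS
    print(f"{name}: {tflops:.1f} TFLOPS")
```

Run it and you get 5.0, 5.8, 3.6, 4.4, and 3.0 TFLOPS, matching the table row for row.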
On paper, Intel's Arc A380 is essentially a competitor to AMD's RX 6500 XT and RX 6400, along with Nvidia's GTX 1650 Super and GTX 1650. Looking at current online prices, especially for new cards, it's slightly cheaper than the competition while offering a broadly similar feature set.
Nvidia offers no ray tracing hardware below the RTX 3050 (or RTX 2060), and neither AMD's nor Nvidia's GPUs in this segment include tensor hardware, giving Intel a potential edge in deep learning and AI applications, though it's not an entirely apples-to-apples comparison.
Intel is currently the only GPU company to offer hardware-accelerated AV1 and VP9 video encoding. We expect AMD and Nvidia to add AV1 support with their upcoming RDNA 3 and Ada architectures, and possibly VP9 as well, but there's no official confirmation of how that will roll out. We'll look at encoding performance and quality later in the review; note that the GTX 1650 uses Nvidia's older NVENC hardware, which produces lower-quality output than the newer Turing (and Ampere) versions.
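For reference, here's roughly how an AV1 hardware encode on Arc might be invoked through FFmpeg's Quick Sync encoder. This is a sketch, not our test methodology: `av1_qsv` is FFmpeg's real Intel Quick Sync AV1 encoder name, but it requires a recent FFmpeg build with QSV support, and the file names and bitrate below are placeholders.

```python
def build_av1_qsv_cmd(src, dst, bitrate_kbps=6000):
    """Build an FFmpeg command for AV1 encoding on Intel QSV hardware.

    Illustrative only: file names and bitrate are placeholders, and the
    av1_qsv encoder needs an FFmpeg build with Quick Sync AV1 support.
    """
    return [
        "ffmpeg", "-y",
        "-i", src,                   # input file
        "-c:v", "av1_qsv",           # Intel Quick Sync AV1 encoder
        "-b:v", f"{bitrate_kbps}k",  # target video bitrate
        "-c:a", "copy",              # pass the audio through untouched
        dst,
    ]

print(" ".join(build_av1_qsv_cmd("input.mp4", "output.mkv")))
```

The resulting command list could be handed to `subprocess.run` on a system with an Arc GPU and the appropriate drivers.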
The Arc A380 offers 5.0 teraflops of theoretical compute, slightly less than the RX 6500 XT but better than everything else here. It's also the only GPU in this price class with 6GB of GDDR6 memory, attached to a 96-bit memory interface. That gives the A380 more raw memory bandwidth than AMD's Navi 24 cards (which lean on Infinity Cache to compensate), though less than the GTX 1650 Super. Power consumption targets 75W, but as with AMD and Nvidia GPUs, factory-overclocked cards can exceed that.
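Memory bandwidth is simply per-pin VRAM speed times bus width; running the spec table's numbers shows where the A380 lands (our arithmetic, not vendor figures):

```python
# Memory bandwidth = per-pin speed (Gbps) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(vram_speed_gbps, bus_width_bits):
    return vram_speed_gbps * bus_width_bits / 8  # GB/s

for name, speed, bus in [
    ("Arc A380",       15.5, 96),
    ("RX 6500 XT",     18.0, 64),
    ("RX 6400",        16.0, 64),
    ("GTX 1650 Super", 12.0, 128),
    ("GTX 1650",        8.0, 128),
]:
    print(f"{name}: {bandwidth_gbs(speed, bus):.0f} GB/s")
```

The A380's 186 GB/s comfortably beats Navi 24's raw 144 GB/s and 128 GB/s (before Infinity Cache enters the picture) but trails the GTX 1650 Super's 192 GB/s.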
Ray tracing capabilities are more difficult to pin down. As a quick summary: Nvidia's Turing architecture in the RTX 20-series has full hardware ray tracing, with each RT core capable of four ray/triangle intersection calculations per cycle, plus hardware support for BVH (bounding volume hierarchy) traversal. Nvidia's Ampere architecture adds a second ray/triangle intersection unit to each RT core, potentially doubling throughput; in practice, Nvidia says Ampere's RT cores are typically about 75% faster, since not every execution slot can be kept filled. AMD's RDNA 2 architecture is Turing-like in that each ray accelerator can perform four ray/triangle intersection computations per cycle, but BVH traversal runs on the GPU shaders, which is slower and more memory intensive.
From what we can confirm (we've asked Intel for clarification), Intel's RTU is similar to Nvidia's Turing RT core in that it can perform four ray/triangle intersections per cycle. We're still trying to determine whether the RTUs contain BVH traversal hardware, or whether, like AMD's ray accelerators, they lean on the GPU shaders for BVH traversal. It sounds like dedicated BVH hardware is included, in which case they could actually be pretty decent.
Still, with only eight RTUs, the A380 definitely won't be a ray tracing powerhouse. For comparison, Nvidia's desktop RTX cards all have at least 20 RT cores, and even the mobile RTX 3050, Nvidia's slowest RTX chip, has 16 RT cores. AMD's slowest discrete RDNA 2 part, the RX 6400, has 12 ray accelerators, while integrated RDNA 2 implementations like the Steam Deck make do with just eight. And it's now been four years since hardware-accelerated ray tracing first arrived, so expectations have risen.
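Using the per-cycle figures discussed above, peak intersection throughput is easy to estimate. This is our own back-of-the-envelope math, and it leans on the not-yet-confirmed four-intersections-per-cycle rate for Intel's RTUs:

```python
# Rough peak ray/triangle intersection rates from the per-cycle figures above.
# Intel's four-per-RTU rate is our working assumption, not a confirmed spec.
def giga_intersections_per_sec(units, intersections_per_cycle, clock_ghz):
    """Peak ray/triangle intersection tests per second, in billions."""
    return units * intersections_per_cycle * clock_ghz

# Arc A380: 8 RTUs at a Turing-like 4 intersections/cycle, 2.45 GHz boost.
print(f"Arc A380: {giga_intersections_per_sec(8, 4, 2.45):.1f}G intersections/s")
# RX 6400: 12 ray accelerators at 4 intersections/cycle, 2.321 GHz boost.
print(f"RX 6400:  {giga_intersections_per_sec(12, 4, 2.321):.1f}G intersections/s")
```

Even by this naive measure, which ignores BVH traversal entirely, the A380 trails the RX 6400, so tempered ray tracing expectations seem warranted.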