Intel Discloses New Details On Meteor Lake VPU Block, Lays Out Vision For Client AI

Intel’s upcoming Meteor Lake (14th Gen Core) platform is still at least a few months away from launch, but Intel is already laying the groundwork for its arrival. At this year’s Computex, a show that has quickly become an AI-centric event, Intel is demonstrating its vision of client-side AI inference for next-generation systems. This includes some new disclosures about the AI processing hardware that will be included in Meteor Lake, as well as what Intel expects OS and software developers to do with those new capabilities.

Of course, AI has quickly become a veritable buzzword in the tech industry over the past several months, especially following the public launch of ChatGPT and the explosion of interest in what is now being termed “generative AI.” And as with the early days of any major new computing technology, hardware and software vendors alike are still figuring out what can be done with this technology, and what the best hardware designs are to power it. Behind all of that… let’s just say there’s a lot of potential revenue waiting for the companies that thrive in this new AI race.

Intel, for its part, is no stranger to AI hardware, though it’s certainly not an area where a company best known for its CPUs and fabs (and in that order) usually commands the spotlight. Intel’s wholly-owned subsidiaries in this space include Movidius, which makes low-power vision processing units (VPUs), and Habana Labs, which is responsible for the Gaudi family of high-end deep learning accelerators. But even within Intel’s general client offerings, the company has incorporated some very basic, ultra-low-power AI-adjacent hardware in the form of its Gaussian & Neural Accelerator (GNA) block for audio processing, which has been built into the Core family since the Ice Lake architecture.

Still, in 2023 the winds are clearly blowing toward adding more AI hardware at every level, from client to server. So for this year’s Computex, Intel is disclosing a bit more detail about its AI efforts for Meteor Lake.

Meteor Lake: SoC Tile Includes a Movidius-Derived VPU for Low-Power AI Inference

On the hardware side of matters, the big disclosure from Intel is that, as has long been suspected, the company is building more powerful AI hardware into its disaggregated SoC. Previously documented as an “XPU” block within Meteor Lake’s SoC tile (the middle tile) in some Intel presentations, Intel is now confirming that this XPU is a full AI acceleration block.

Specifically, the block is derived from Movidius’ third-generation Vision Processing Unit (VPU) design, and will be appropriately branded as a VPU by Intel.

Intel has provided only a limited amount of technical detail on the VPU block for Computex: there are no performance figures, nor information about how much die space it occupies on the SoC tile. For reference, however, Movidius’ latest discrete VPU, the Myriad X, incorporates a fairly flexible neural compute engine that gives the VPU its neural network capabilities. The Myriad X’s engine is rated at 1 TOPS of throughput, and after almost six years and several process nodes, Intel is almost certainly aiming higher than that with Meteor Lake.

Because the VPU is part of the Meteor Lake SoC tile, it will be present on all Meteor Lake SKUs; Intel does not intend to use it as a differentiator the way it does with features such as ECC and integrated graphics. So the VPU will be a baseline feature available across all Meteor Lake-based parts.

The VPU’s purpose is to provide a third option for AI processing. For high-performance needs there is the integrated GPU, whose large ALU array can supply a relatively large amount of throughput for the matrix math behind neural networks. Meanwhile, the CPU remains the best processor for simple, low-latency workloads, where you can’t afford to wait for the VPU to initialize, or where the size of the workload doesn’t justify the effort. That leaves the VPU in the middle as a dedicated low-power AI accelerator, used for sustained AI workloads that don’t require the performance (and power cost) of the GPU.
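To make that division of labor concrete, here’s a minimal sketch of how a developer might target each engine using Intel’s OpenVINO toolkit. Note the device name for the VPU is an assumption on our part; Intel had not published the plugin name for Meteor Lake’s VPU at the time of this disclosure, so “NPU” here is a placeholder:

```python
# Minimal OpenVINO sketch: compiling one model for each of the three engines.
# Assumption: the Meteor Lake VPU is exposed as a device plugin named "NPU";
# Intel has not confirmed the actual device name. "model.xml" is a
# hypothetical OpenVINO IR model file.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on such a system

model = core.read_model("model.xml")

# CPU: quick, low-latency one-off inference (no accelerator warm-up cost)
cpu_compiled = core.compile_model(model, "CPU")

# GPU: highest throughput for heavy, bursty workloads
gpu_compiled = core.compile_model(model, "GPU")

# VPU: sustained background workloads at minimal power draw (name assumed)
vpu_compiled = core.compile_model(model, "NPU")
```

The point of the sketch is simply that, in this model, the developer (or the runtime) picks the engine per workload, rather than the VPU transparently replacing the CPU or GPU.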

It’s also worth noting that the GNA block will remain in Meteor Lake, even though it’s not explicitly shown in Intel’s diagrams. Its claim to fame is ultra-low-power operation, so it’s still needed for things like wake-on-voice, as well as for compatibility with existing GNA-enabled software.

Aside from that, there’s still a lot we don’t know about the Meteor Lake VPU. The fact that it’s called a VPU and incorporates Movidius technology implies that, like Movidius’ discrete VPUs, it’s a computer vision-focused design. If that’s the case, the Meteor Lake VPU may be great at handling visual workloads while lacking performance and flexibility in other areas. And Intel’s disclosure today immediately raises the question of how the block’s performance and features will compare to AMD’s Xilinx-derived Ryzen AI block, but those questions will have to wait for another day.

Intel feels well-positioned to lead the AI transformation in the client space, at least for now. And it wants the world, developers and users alike, to know it.

Software Side: What to do with AI?

As noted at the beginning of this piece, hardware is only half of the equation when it comes to accelerating AI. Even more important than what you run it on is what you actually do with it, and that is something Intel and its software partners are still working out.

At the most basic level, the inclusion of a VPU provides additional, energy-efficient performance for tasks that are already more or less AI-driven on some platforms, such as dynamic noise suppression and background segmentation. In that respect, the VPU brings Intel’s chips up to par with smartphone-class SoCs, where Apple’s Neural Engine and Qualcomm’s Hexagon NPU already provide similar acceleration today.

But Intel has its sights set on an even bigger prize. Along with facilitating entirely new AI workloads, Intel and its partners want to take current server-based AI workloads and move them to the edge, i.e., shifting AI processing onto the client.

Just what those workloads will be, we still don’t fully know at this point. Microsoft unveiled some of its own ideas at its annual Build conference last week, such as the Copilot feature for Windows 11. And the OS vendor is also laying the groundwork for developers with its support for the Open Neural Network Exchange (ONNX) runtime.
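For developers, targeting that runtime looks roughly like the sketch below, using ONNX Runtime’s Python API. The DirectML execution provider shown here is how Windows currently abstracts GPU acceleration; whether (and under what name) a dedicated VPU provider will appear is still an open question, and the model file is hypothetical:

```python
# Minimal ONNX Runtime sketch: running a model through Windows' ML stack.
# "model.onnx" is a hypothetical model file. The DirectML provider is real
# today; how (or if) Meteor Lake's VPU will surface here is not yet known.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # CPU fallback
)

# Build a dummy input matching the model's declared shape, substituting 1
# for any dynamic dimensions.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

The appeal of this layer for Intel is that developers write to ONNX once, and the runtime decides which execution provider, and thus which piece of silicon, actually runs the model.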

The whole tech world is at a point where it has been handed a new hammer, and now everything is starting to look like a nail. Intel isn’t exempt from that, either, as even today’s disclosure is more aspirational than it is specific about the software side of matters. But we are in the very early days of AI, and no one knows exactly what it can and cannot do. Certainly, there are some nails out there waiting to be hammered.

To that end, Intel aims to foster a “ready to build” ecosystem for AI in the PC space: providing the hardware from the ground up, working with Microsoft to deliver the software tools and APIs needed to use that hardware, and then seeing what new experiences developers can come up with, and what existing workloads can be moved to the VPU to reduce power usage.

Ultimately, Intel expects AI-based workloads to transform the PC user experience. And whether or not that comes to pass outright, the expectation has been enough to ensure that the next generation of CPUs will have the hardware on hand for the task.
