Nvidia and Microsoft are working together to solve a big problem with Copilot+

Nvidia and Microsoft are collaborating on an Application Programming Interface (API) that will let developers run Copilot+ AI features on RTX graphics cards.

When Microsoft announced the Copilot+ PC platform a few weeks ago, a common question was: "Why can't I run these AI applications on my GPU?" At Computex 2024, Nvidia finally gave an answer.

Nvidia and Microsoft are collaborating on an Application Programming Interface (API) that allows developers to run their AI applications on RTX graphics cards. This includes the various Small Language Models (SLMs) that make up the Copilot runtime and underpin heavily discussed new features such as Recall and Live Captions.

With this API, developers can have their applications run locally on the GPU instead of the NPU. That opens the door not only to more powerful AI applications, since GPUs generally offer higher AI processing performance than NPUs, but also to running them on PCs that currently fall outside the Copilot+ ecosystem.
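The joint API has not been published yet, so the details of how an application targets the GPU are unknown. As a rough, hypothetical sketch of the idea of running a local model on the graphics card rather than the NPU, here is how it might look today using ONNX Runtime with its CUDA execution provider; the model file name and input names are placeholders, not part of the announced API.

```python
# Minimal sketch: running a local model on the GPU with ONNX Runtime.
# The actual Nvidia/Microsoft API is not public, so this uses ONNX Runtime
# with the CUDA execution provider purely as an illustration; "slm.onnx"
# and "input_ids" are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Prefer the GPU (CUDA) provider and fall back to the CPU if it is unavailable.
session = ort.InferenceSession(
    "slm.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Hypothetical input: a batch of token IDs for a small language model.
token_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)
outputs = session.run(None, {"input_ids": token_ids})
print(outputs[0].shape)
```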

It is no exaggeration to call this a significant shift. Copilot+ PCs currently require a Neural Processing Unit (NPU) capable of at least 40 tera operations per second (TOPS), and at present only the Snapdragon X Elite meets that bar. GPUs, by contrast, offer far higher AI processing capability: even low-end models reach around 100 TOPS, and higher-end cards deliver many times that.

In addition to running on GPUs, the new API adds retrieval-augmented generation (RAG) capabilities to the Copilot runtime. RAG gives the AI model access to locally stored information, allowing it to provide more useful answers. We saw RAG on full display in Nvidia's Chat with RTX chatbot, launched earlier this year.
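To make the RAG idea concrete: the model does not memorize your files; instead, the most relevant local documents are retrieved and prepended to the prompt at query time. The sketch below illustrates that flow with a toy embedding function and sample documents, all of which are hypothetical stand-ins rather than the Copilot runtime's actual interface.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant local document and prepend it to the prompt before generation.
# The embedding function and documents are placeholders for illustration only.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: in practice a local embedding model would be used.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

documents = [
    "Meeting notes from the Q2 planning session.",
    "Driver release notes for the RTX 4070.",
    "Grocery list for the weekend.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "What changed in the latest GPU driver?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# The assembled prompt would then be passed to the local language model.
print(prompt)
```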


In addition to the API, Nvidia announced the RTX AI Toolkit at Computex. This developer kit, which officially launches in June, combines various tools and SDKs that let developers tailor AI models for specific applications. Nvidia says that with the RTX AI Toolkit, developers can produce models that are four times faster and three times smaller than those built with general open-source solutions.
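Nvidia has not detailed how the toolkit achieves those gains. One common ingredient in this kind of model tailoring is weight quantization, which trades a little precision for a much smaller memory footprint. The sketch below is a generic illustration of post-training int8 quantization, not the RTX AI Toolkit's actual method, and it makes no attempt to reproduce Nvidia's quoted figures.

```python
# Generic illustration of post-training weight quantization: map float32
# weights to int8 plus a per-tensor scale, shrinking the stored size.
# This is not the RTX AI Toolkit's method, just a sketch of the technique.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 values with a shared scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

layer = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(layer)

print(f"float32 size: {layer.nbytes / 1e6:.1f} MB")
print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")
print(f"max error:    {np.abs(dequantize(q, scale) - layer).max():.4f}")
```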

The tech world is seeing a wave of tools that let developers build targeted AI applications for end users. Some have already appeared on Copilot+ PCs, and the trend looks set to continue at least through the end of next year. Ultimately, we have the hardware to run these applications; what's missing now is the software.
