The arrival of processors that incorporate an NPU has been a revolution for technologies related to artificial intelligence, and this dedicated chip allows them to achieve exceptional performance in most cases. Now we know what AMD's Ryzen AI 300 processors are really capable of: the power offered by these new APUs shows the company's ability to surpass what its main competitors offer, delivering extremely high AI performance.
Not a day goes by without artificial intelligence being talked about, whether because of the advances being made or because of the way most companies are trying to sell it, even though for now it is not something that can be deployed on a large scale among users. With the arrival of new processors from the major manufacturers, the noise has only grown, and as we well know, both Intel and AMD have focused their laptop processors on offering the best performance for this technology. Now we have a comparison between the products of both companies.
With impressive performance, the Ryzen AI 300 chips could be the best CPUs for AI
One of the main measures of a processor aimed at AI performance is how well it handles the various LLMs on the market, including the most sophisticated ones available online. AMD has run a series of tests comparing its chip with that of its main rival, Intel, in artificial intelligence workloads, and the creator of Ryzen wins by a wide margin, although the comparison pits a Ryzen AI 9 HX 375 against an Intel Core Ultra 7 258V.
The AMD processor has a series of advantages that translate into superior performance, visible in the token generation rate both chips achieve in LM Studio, which is built on Llama.cpp: AMD shows up to 27% higher throughput than Intel, and its latency is practically three times lower than its opponent's.
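To make those two metrics concrete, here is a minimal sketch of how token throughput and time-to-first-token can be measured with llama-cpp-python (the Python bindings for Llama.cpp). This is an illustration, not AMD's test harness: the GGUF file name, prompt and token count are placeholders.

```python
# Minimal sketch: measure token throughput and time-to-first-token
# with llama-cpp-python (pip install llama-cpp-python).
# The model path and prompt are placeholders, not AMD's test files.
import time
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3.2-1b-instruct.Q4_K_M.gguf",  # placeholder GGUF
            n_ctx=2048, verbose=False)

prompt = "Explain what an NPU is in one paragraph."
start = time.perf_counter()
first_token_at = None
n_tokens = 0

# Stream the completion so the first token can be timed separately
# from the overall generation rate.
for _chunk in llm(prompt, max_tokens=128, stream=True):
    if first_token_at is None:
        first_token_at = time.perf_counter()
    n_tokens += 1

elapsed = time.perf_counter() - start
print(f"time to first token: {(first_token_at - start) * 1000:.0f} ms")
print(f"throughput: {n_tokens / elapsed:.1f} tokens/s")
```

The 27% figure refers to the second number (tokens per second), while the latency claim refers to the first (how long the user waits before the model starts answering).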
But that is not all it offers, since these figures were obtained using only the main AI modules, that is, CPU+NPU. By activating the iGPU together with the feature called Variable Graphics Memory (VGM), performance increases even further: using the iGPU alone, it exceeds the previous maximum in Meta Llama 3.2 1b by 16 tokens, while with VGM enabled the gain grows to almost 31 tokens over the standard configuration.
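For readers who want to reproduce that kind of CPU-versus-iGPU comparison on their own machine, the sketch below times generation with and without GPU offload, which is what LM Studio exposes as a "GPU offload" slider and llama.cpp as n_gpu_layers. It assumes a llama-cpp-python build with GPU support (Vulkan or ROCm); VGM itself is configured in the AMD driver or BIOS, not from this script, and the model path is again a placeholder.

```python
# Sketch: compare CPU-only generation against iGPU offload via
# llama.cpp's n_gpu_layers option. Assumes a GPU-enabled
# llama-cpp-python build; VGM is set in the AMD driver/BIOS.
import time
from llama_cpp import Llama

def tokens_per_second(n_gpu_layers: int) -> float:
    llm = Llama(model_path="models/llama-3.2-1b-instruct.Q4_K_M.gguf",  # placeholder
                n_gpu_layers=n_gpu_layers, n_ctx=2048, verbose=False)
    start = time.perf_counter()
    n_tokens = 0
    for _ in llm("Write a short story about a robot.", max_tokens=128, stream=True):
        n_tokens += 1
    return n_tokens / (time.perf_counter() - start)

cpu_rate = tokens_per_second(n_gpu_layers=0)    # CPU (and NPU where applicable) only
igpu_rate = tokens_per_second(n_gpu_layers=-1)  # offload all layers to the iGPU
print(f"CPU only : {cpu_rate:.1f} tok/s")
print(f"iGPU     : {igpu_rate:.1f} tok/s ({igpu_rate / cpu_rate:.2f}x)")
```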
One of the demos compares the models bundled with Intel AI Playground, Mistral 7b Instruct v0.3 and Microsoft Phi 3.1 Mini Instruct, using comparable quantization in LM Studio; there, the AMD Ryzen AI 9 HX 375 is 8.7% faster in Phi 3.1 and 13% faster in Mistral 7b Instruct 0.3.
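The key detail in that demo is "comparable quantization": both chips run the same GGUF builds at the same precision so the comparison is like-for-like. A rough sketch of that setup is shown below; the file names are placeholders for locally downloaded Q4_K_M builds and the prompt is arbitrary.

```python
# Sketch: like-for-like throughput comparison of two instruct models
# at the same quantization level (e.g. Q4_K_M). File names are placeholders.
import time
from llama_cpp import Llama

MODELS = {
    "Mistral 7B Instruct v0.3": "models/mistral-7b-instruct-v0.3.Q4_K_M.gguf",
    "Phi 3.1 Mini Instruct":    "models/phi-3.1-mini-4k-instruct.Q4_K_M.gguf",
}

for name, path in MODELS.items():
    llm = Llama(model_path=path, n_ctx=2048, verbose=False)
    start = time.perf_counter()
    n_tokens = sum(1 for _ in llm("Summarize the benefits of an NPU.",
                                  max_tokens=128, stream=True))
    print(f"{name}: {n_tokens / (time.perf_counter() - start):.1f} tok/s")
```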