AMD says its Ryzen AI 300 series of mobile processors outperforms Intel’s mobile chips at large language model (LLM) tasks. In a recent blog post, the company detailed the benchmarks it ran to back up that claim and offered tips on tuning the popular LLM app LM Studio to get the most out of its hardware.
AMD ran its tests in LM Studio, a desktop application that lets users download and host LLMs locally. The software is built on the llama.cpp code library, offers CPU and/or GPU acceleration to run the models, and gives users fine-grained control over how they behave.
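For readers who want to try this themselves, the sketch below shows one way to query a model hosted locally in LM Studio from Python, via the OpenAI-compatible local server the app can expose. The endpoint address, port, model name, and prompt are assumptions about a default setup, not anything taken from AMD’s post.

```python
# Minimal sketch: querying a model hosted locally in LM Studio.
# Assumes LM Studio's local server is running with a model loaded;
# the default endpoint (http://localhost:1234/v1) may differ on your setup.
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",  # OpenAI-compatible endpoint
    json={
        "model": "local-model",  # placeholder; LM Studio serves whichever model is loaded
        "messages": [
            {"role": "user", "content": "Summarize what a token is in one sentence."}
        ],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```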
Using several models, including Meta’s Llama 3.2, Microsoft’s Phi 3.1 4k Mini Instruct 3b, Google’s Gemma 2 9b, and Mistral’s Nemo 2407 12b, AMD pitted laptops equipped with its Ryzen AI 9 HX 375 processor against machines running Intel’s Core Ultra 7 258V. The tests measured generation speed in tokens per second and the time to produce the first token, two standard gauges of LLM throughput and responsiveness.
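As a rough illustration of those two metrics, the hedged sketch below streams a response from a local OpenAI-compatible server (such as the one LM Studio provides) and times the first token and the overall generation rate. The endpoint, model name, and chunk-based token count are assumptions for illustration; this is not AMD’s benchmarking methodology.

```python
# Sketch of measuring time to first token and tokens per second by streaming
# from a local OpenAI-compatible server. "Tokens" are counted as streamed
# chunks, which only approximates the tokenizer's own count.
import json
import time

import requests

start = time.perf_counter()
first_token_at = None
chunks = 0

with requests.post(
    "http://localhost:1234/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "local-model",  # placeholder
        "messages": [{"role": "user", "content": "Write a short paragraph about laptops."}],
        "max_tokens": 256,
        "stream": True,  # server sends tokens as they are generated
    },
    stream=True,
    timeout=300,
) as response:
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"].get("content")
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # latency to first token
            chunks += 1

if first_token_at is None:
    raise RuntimeError("no tokens received from the server")

elapsed = time.perf_counter() - start
ttft = first_token_at - start
print(f"time to first token: {ttft:.2f}s")
print(f"approx. tokens/second: {chunks / (elapsed - ttft):.1f}")
```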
According to AMD, the Ryzen AI 9 HX 375 beat Intel’s Core Ultra 7 258V across every LLM tested, generating text both faster and sooner, with up to a 27 percent advantage in some scenarios. Notably, the AMD laptop in the comparison had slower RAM than the Intel machine, and memory bandwidth is usually a bottleneck for on-device LLM inference, which makes the result more striking.
It’s worth noting that pitting AMD’s flagship Strix Point chip against Intel’s midrange processor isn’t an entirely apples-to-apples comparison, given the gap in specifications. AMD also highlighted LM Studio’s GPU offload feature, which moves some or all of a model’s layers onto the integrated Radeon GPU, as a further advantage of Ryzen AI 300-series laptops for AI tasks.
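To make the GPU-offload idea concrete: LM Studio exposes it as a setting in its interface, but since the app is built on llama.cpp, the same mechanism can be sketched with the llama-cpp-python bindings, as below. The model file and layer count are placeholders, and this illustrates the general technique rather than AMD’s specific configuration.

```python
# Illustrative sketch of GPU offload at the llama.cpp level, which underlies
# LM Studio's GPU-acceleration setting. The model path is a placeholder for
# any local GGUF file on your own machine.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gemma-2-9b-it-Q4_K_M.gguf",  # placeholder GGUF model
    n_gpu_layers=-1,  # -1 offloads every layer the GPU backend can take
    n_ctx=4096,       # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GPU offload in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```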
In the broader race over AI performance, vendors are eager to convince end users that on-device AI matters. Applications like LM Studio aim to put advanced LLMs within reach of ordinary users through a friendly interface. How much any individual cares about running LLMs locally, or about squeezing more performance out of their hardware for the task, will vary, but these advances continue to shape the computing industry.
As demand for AI-driven applications grows, how well processors handle heavyweight workloads like LLM inference matters more and more. At least in AMD’s own benchmarks, the Ryzen AI 300 series has outpaced Intel’s mobile processors at these tasks, setting a marker for performance in a fast-moving field.