AMD’s Strix Point APUs are making waves in consumer LLM workloads, outperforming Intel’s Lunar Lake offerings. As demand for AI performance rises, companies are introducing specialized hardware to stay competitive. AMD has taken a step in that direction with its mobile-focused Strix Point APUs, claiming a significant lead over Intel in LLM throughput while also cutting latency.
According to AMD, the Ryzen AI 300 processors at the heart of Strix Point deliver more tokens per second than Intel’s Lunar Lake chips in consumer LLM applications running in LM Studio. The Ryzen AI 9 HX 375, for example, offers up to 27% higher throughput than Intel’s Core Ultra 7 258V. That headroom matters as LLMs continue to evolve and demand faster, more efficient hardware.
The Strix Point APUs also shine in the latency department. The throughput numbers behind that 27% lead come from Meta Llama 3.2 1B Instruct, where the Ryzen AI 9 HX 375 reaches up to 50.7 tokens per second (tk/s) against 39.9 tk/s for the Intel chip; on top of that, AMD claims up to 3.5 times lower latency, measured as time to first token, than its Intel counterpart. Lower latency translates to quicker responses and better overall efficiency.
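For readers who want to check these two metrics on their own hardware, here is a rough sketch. It assumes LM Studio’s local server (an OpenAI-compatible API) is running at its default address of http://localhost:1234/v1 with a model already loaded; the model name and prompt below are placeholders, and counting streamed chunks only approximates the token count.

```python
# Rough sketch: measure time-to-first-token and decode throughput against a
# local LM Studio server (OpenAI-compatible API). Model name and prompt are
# placeholders; the default port (1234) may differ on your setup.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # placeholder: use the name LM Studio shows
    messages=[{"role": "user", "content": "Summarize the benefits of NPUs in one paragraph."}],
    max_tokens=256,
    temperature=0.0,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # latency: time to first token
        chunks += 1

end = time.perf_counter()
if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.2f} s")
    # Each streamed chunk is roughly one token, so this approximates tokens/sec.
    print(f"approx. throughput: {chunks / (end - first_token_at):.1f} tk/s")
```

Running the same prompt and settings on two machines gives a rough, apples-to-apples view of both the latency and throughput figures discussed above.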
Both Intel’s Lunar Lake and AMD’s Strix Point APUs lean on capable integrated graphics. LM Studio can offload work to the iGPU through the Vulkan API to speed up LLM inference. AMD’s Strix Point APUs feature Radeon graphics based on the RDNA 3.5 architecture, which AMD says delivers up to a 31% performance boost in Llama 3.2. On top of that, Variable Graphics Memory (VGM) on Ryzen AI 300 processors lets system memory be reallocated to the iGPU, and AMD reports up to a 60% increase in performance when VGM is combined with iGPU acceleration.
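Since LM Studio builds on llama.cpp, the same kind of iGPU offload can be sketched directly with the llama-cpp-python bindings. This is only an illustration: it assumes the package was built with the Vulkan backend enabled, and the model path and settings are placeholders.

```python
# Sketch of iGPU offload via llama.cpp's Python bindings (llama-cpp-python).
# Assumes a build with the Vulkan backend enabled, e.g.:
#   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
# The model path is a placeholder for a local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.2-1b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 offloads every layer to the iGPU; 0 keeps inference on the CPU
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Variable Graphics Memory in two sentences."}],
    max_tokens=128,
    temperature=0.0,
)
print(out["choices"][0]["message"]["content"])
```

Note that VGM itself is toggled at the system level (for example through AMD’s Adrenalin software) rather than from application code, so the snippet above only controls how many layers the runtime offloads to the iGPU.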
To ensure a fair comparison, AMD also tested both chips in Intel’s own AI Playground with identical settings. The results showed the Ryzen AI 9 HX 375 up to 8.7% faster than the Core Ultra 7 258V on Microsoft Phi 3.1 and up to 13% faster on Mistral 7b Instruct v0.3. Impressive as that is, the HX 375 is the fastest Strix Point part available, so it would be interesting to see how it fares against Intel’s flagship Core Ultra 9 288V rather than the Core Ultra 7 258V.
Overall, AMD’s focus on making LLMs accessible to users without deep technical expertise is commendable. LM Studio, built on the llama.cpp framework, offers a user-friendly way to run LLMs locally and showcases what the Strix Point APUs can do. As AI workloads continue to evolve, hardware like the Strix Point APUs will be essential to meet the growing demand for faster, more efficient processing.