Apple made a bold move three years ago to take full control of its technology stack by switching to its own silicon computer chips. Today, Apple released MLX, an open source framework specifically tailored to machine learning on Apple’s M-series chips.
Most AI software development currently takes place on open source Linux or Microsoft Windows systems, and Apple doesn’t want its thriving developer ecosystem to be left out of the latest big thing.
MLX aims to solve long-standing compatibility and performance issues with Apple’s unique architecture and software, but it’s more than just a technical play. MLX features a user-friendly design inspired by popular frameworks such as PyTorch, Jax, and ArrayFire. Its introduction promises to further streamline the process of training and deploying machine learning models on Apple devices.
Architecturally, MLX differentiates itself with a unified memory model: arrays reside in shared memory, so operations can run on any supported device type without duplicating data. That flexibility matters for developers who want to choose where their computations run.
Simply put, unified memory means the GPU shares the computer’s RAM instead of using its own dedicated VRAM, so a Mac’s memory serves both purposes rather than requiring a powerful PC plus a discrete GPU with lots of VRAM.
However, the path to AI development on Apple Silicon has not been without its challenges due to its closed ecosystem and lack of compatibility with many open source development projects and widely used infrastructure.
“It’s exciting to see more tools like this for working with tensor-like objects, but I really hope Apple makes it easier to port custom models in a high-performance way,” one commenter wrote in a Hacker News discussion of the announcement.
Until now, developers had to convert their models with CoreML to run them on Apple hardware, and that dependence on a translation layer is not ideal. CoreML focuses on transforming existing machine learning models and optimizing them for Apple devices. MLX, on the other hand, provides tools for innovation and development within the Apple ecosystem by efficiently creating and running machine learning models directly on Apple’s own hardware.
MLX performed well in benchmark tests, and its compatibility with tools like Stable Diffusion and OpenAI’s Whisper represents an important step forward. In particular, performance comparisons highlight MLX’s efficiency: it outperforms PyTorch at image generation for larger batch sizes.
For example, Apple reports that fully generating 16 images with 50 diffusion steps and classifier-free guidance takes about 90 seconds with MLX, versus about 120 seconds with PyTorch.
As AI continues to advance at a rapid pace, MLX represents an important milestone for the Apple ecosystem. It not only solves technical challenges but also opens up new possibilities for AI and machine learning research and development on Apple devices. It is also a strategic move, given Apple’s divorce from Nvidia and the strength of Apple’s own ecosystem.
MLX aims to make Apple’s platform a more attractive and viable option for AI researchers and developers, and it means a happier Christmas for AI-obsessed Apple fans.