In a significant move to advance the capabilities of its machine learning framework, Apple has announced the open-sourcing of Core ML Stable Diffusion XL (SDXL) for its cutting-edge Apple Silicon architecture. The new model, roughly three times the size of its predecessor at around 2.6 billion parameters, brings a host of powerful features that enhance performance while maintaining efficiency.
SDXL has been a topic of much anticipation and speculation among developers and machine learning enthusiasts, and with its open-sourcing today, the excitement is reaching new heights.
One of the highlights of this release is a new model compression technique, which enables variants of SDXL to be quantised to as few as 3 bits per weight with minimal loss in output quality. This breakthrough paves the way for faster and more efficient inference on Apple Silicon devices.
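The intuition behind low-bit quantisation can be shown with a toy sketch. This is a generic uniform quantiser, not Apple's actual mixed-bit algorithm, and the `quantize` and `max_error` helpers are hypothetical names used only for illustration:

```python
# Illustrative sketch of low-bit weight quantisation (NOT Apple's
# mixed-bit method): map float weights onto 2**n evenly spaced levels
# and watch the reconstruction error shrink as the bit width grows.

def quantize(weights, n_bits):
    """Uniformly quantise a list of floats to 2**n_bits levels."""
    lo, hi = min(weights), max(weights)
    levels = (1 << n_bits) - 1          # number of steps between lo and hi
    scale = (hi - lo) / levels
    # Snap each weight to its nearest level, then map back to a float.
    return [lo + round((w - lo) / scale) * scale for w in weights]

def max_error(weights, n_bits):
    """Worst-case absolute error introduced by quantisation."""
    return max(abs(w - q) for w, q in zip(weights, quantize(weights, n_bits)))

# A toy "layer" of 101 weights spread evenly across [-0.5, 0.5].
weights = [i / 100 - 0.5 for i in range(101)]

for bits in (8, 6, 3):
    print(f"{bits}-bit max error: {max_error(weights, bits):.4f}")
```

At 3 bits there are only 8 representable values per weight, so the worst-case error is bounded by half a quantisation step; each extra bit roughly halves that bound, which is why mixing bit widths across layers lets a model spend precision only where the output is sensitive to it.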
Since the announcement, the GitHub repository for Core ML Stable Diffusion has garnered an astounding 15,000 stars, indicating the keen interest and support from the developer community.
To ensure a seamless user experience, Apple has gone the extra mile to include SDXL support in both its conversion and inference packages, allowing developers to easily integrate the model into their workflows. The company has also added SDXL support to a new demo app, showcasing the capabilities of the model and the power of Core ML on the Mac.
Apple’s commitment to providing top-of-the-line machine learning tools is further evident in the introduction of mixed-bit quantisation for Core ML. Earlier this year, Apple also open-sourced a Transformer implementation optimised for Apple Silicon to spur innovation in large language models. Now, with Stable Diffusion, the company is winning over its developer ecosystem.