PyTorch today announced a collaboration with Apple’s Metal engineering team to introduce GPU-accelerated PyTorch training on Mac systems powered by M1, M1 Pro, M1 Max and M1 Ultra chips. Until now, PyTorch training on Macs was limited to the CPU, but with the launch of PyTorch v1.12, developers can use the Apple silicon GPU to accelerate model training workflows, such as prototyping and fine-tuning, on the Mac itself.
Apple silicon Macs have a unified memory architecture that gives the GPU direct access to the full memory store. GPU acceleration is enabled by a new PyTorch backend built on Apple’s Metal Performance Shaders (MPS). The MPS backend extends the PyTorch framework, providing the scripts and capabilities needed to set up and run operations on the Mac, with compute kernels fine-tuned for the unique characteristics of each Metal GPU family.
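In practice, the MPS backend is used the same way as any other PyTorch device. A minimal sketch, assuming PyTorch 1.12 or later (the code falls back to the CPU when MPS is unavailable, so it also runs on non-Mac machines):

```python
import torch

# Use the Apple-silicon GPU when the MPS backend is built and available;
# otherwise fall back to the CPU so the script still runs elsewhere.
# getattr guards against PyTorch versions older than 1.12, which lack
# the torch.backends.mps module entirely.
mps = getattr(torch.backends, "mps", None)
if mps is not None and mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Tensors and modules are moved to the device exactly as with CUDA.
x = torch.ones(3, 3, device=device)
y = (x * 2).sum()
print(device.type, y.item())
```

Because the device is selected through the standard `torch.device` API, existing training loops typically need no changes beyond replacing `"cuda"` with `"mps"`.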
Figure: Performance speedup from accelerated GPU training compared with CPU-only training. Source: PyTorch
The Mac is thus becoming a capable platform for training ML models: the unified memory architecture cuts the latency of data retrieval, reduces the costs associated with cloud-based development and improves end-to-end performance.
A preview build of PyTorch v1.12 with GPU-accelerated training is available for macOS 12.3 or later with the native version of Python.
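As a setup sketch, the preview build can be installed from pip’s pre-release channel; note that the `--pre` flag selects pre-release packages, and the exact package index for nightly builds may have changed since the announcement:

```shell
# Requires macOS 12.3+ and a native (arm64) Python installation.
# Install a preview (nightly) build of PyTorch; --pre allows
# pre-release versions to be selected.
pip3 install --pre torch

# Quick check that the MPS backend is present and usable.
python3 -c "import torch; print(torch.backends.mps.is_available())"
```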