Multi-stakeholder organisation MLCommons has built new benchmarks to help advance the state of ML technology. The effort aims to clarify the performance characteristics of different hardware and software for machine learning (ML)—critical for organisations optimising their deployments.
The US-based organisation’s MLPerf testing regimen is organised as a series of benchmark areas, with rounds conducted throughout the year. A set of benchmarks for ML training was released in July 2022, and the latest set of MLPerf benchmarks, covering ML inference, followed in September 2022.
These results are based on the MLPerf Inference v2.1 update, which introduces new models, such as SSD-ResNeXt50 for computer vision, and a new testing division for inference over the network. The additions expand the testing suite to better replicate real-world scenarios.
During training, a model learns from labelled data; during inference, the trained model produces results on new, unseen data—for example, recognising objects in images in a computer vision system.
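To illustrate the distinction between the two phases, here is a minimal, hypothetical sketch (not MLPerf or MLCommons code): "training" fits a toy one-dimensional threshold classifier on labelled examples, and "inference" applies the frozen model to a new input without any further learning.

```python
def train(examples):
    """Training phase: learn a decision threshold from (value, label) pairs."""
    positives = [v for v, label in examples if label == 1]
    negatives = [v for v, label in examples if label == 0]
    # Place the threshold midway between the two classes.
    return (min(positives) + max(negatives)) / 2

def infer(threshold, value):
    """Inference phase: apply the trained model to new data; no learning occurs."""
    return 1 if value > threshold else 0

# Labelled training data (toy example).
data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
threshold = train(data)       # model "learns" -> threshold = 0.5
print(infer(threshold, 0.8))  # prediction on a new input -> 1
```

MLPerf's inference benchmarks measure only the second phase—how quickly and efficiently a deployed model can turn new inputs into predictions.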
Vijay Janapa Reddi, vice president at MLCommons, said, “MLCommons is a global community and our interest really is to enable ML for everyone. What this means is actually bringing together all the hardware and software players in the ecosystem around machine learning so we can try and speak the same language.”
Reddi further said that speaking the same language means having standardised ways of reporting and claiming ML performance metrics. With many variables constantly changing, he emphasised that benchmarking ML inference is a challenging activity; MLCommons’ goal is to measure performance in a standardised way to help track progress.