SynapseML runs on Apache Spark, taking advantage of Spark's large-scale, fault-tolerant cluster management. The library provides APIs for Scala and Python, with the ability to generate bindings for Java, R, and C#.
In addition, it includes the HTTP on Spark module, which lets users efficiently integrate web services into their pipelines, along with pre-built wrappers for invoking several such services, including Azure Cognitive Services.
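The idea behind HTTP on Spark is to fan web-service calls out across the partitions of a distributed dataset. The sketch below illustrates that pattern in plain Python; the `call_service` and `score_partition` names are hypothetical stand-ins, not SynapseML's actual API, and a thread pool stands in for Spark executors.

```python
from concurrent.futures import ThreadPoolExecutor

def call_service(record):
    # Stand-in for a real HTTP request to a web service; with HTTP on
    # Spark, each executor would issue this call for its own rows.
    return {"text": record, "sentiment": "positive" if "good" in record else "neutral"}

def score_partition(partition):
    # Each Spark partition is processed independently on its executor.
    return [call_service(row) for row in partition]

# Two partitions of a toy dataset.
partitions = [["good movie", "average plot"], ["good acting"]]
with ThreadPoolExecutor() as pool:
    results = [row for part in pool.map(score_partition, partitions) for row in part]
# results now holds one enriched record per input row.
```

Because each request is independent, the work scales out with the number of partitions, which is what makes web services usable as pipeline stages.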
To perform distributed inference on Spark using ONNX, developers can deploy pre-trained models from Microsoft's ONNX Model Hub or convert models built in other frameworks, such as TensorFlow or PyTorch. The Spark Serving module allows developers to expose their Spark pipelines as low-latency web services.
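Distributed inference typically means shipping one serialized model to every worker and running it over each partition's rows. The following stdlib-only sketch mimics that flow with a toy linear model standing in for an ONNX graph; the function names are illustrative, not SynapseML's `ONNXModel` interface.

```python
# A tiny linear model standing in for a serialized ONNX graph that
# Spark would broadcast to every executor.
model = {"weights": [0.5, -0.25], "bias": 1.0}

def predict(model, features):
    # One forward pass, analogous to what an ONNX runtime would perform.
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

def score_partition(model, rows):
    # On Spark, the broadcast model is deserialized once per executor
    # and reused for every row in the partition.
    return [predict(model, row) for row in rows]

# Feature vectors split across two partitions.
partitions = [[[2.0, 4.0]], [[0.0, 0.0], [1.0, 1.0]]]
scores = [s for part in partitions for s in score_partition(model, part)]
# scores == [1.0, 1.0, 1.25]
```

Broadcasting the model once per executor, rather than per row, is what keeps inference cheap at scale.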
In his blog post, Hamilton said, “Our goal is to free developers from the hassle of worrying about the distributed implementation details and enable them to deploy them into a variety of databases, clusters, and languages without needing to change their code.”
SynapseML integrates several popular ML frameworks, including ONNX, CNTK, LightGBM, OpenCV, and Vowpal Wabbit. These integrations are exposed through APIs that conform to the Transformer and Estimator abstractions defined by Spark's ML pipelines.
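In Spark's ML pipeline design, an Estimator's `fit()` learns parameters from data and returns a Transformer, whose `transform()` maps data to data. A minimal sketch of that contract, using a hypothetical standard scaler rather than any real SynapseML class:

```python
class StandardScalerModel:
    """Transformer: holds fitted state and maps data to data."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

class StandardScaler:
    """Estimator: fit() learns parameters and returns a Transformer."""
    def fit(self, values):
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        return StandardScalerModel(mean, std)

model = StandardScaler().fit([2.0, 4.0, 6.0])
scaled = model.transform([2.0, 4.0, 6.0])  # zero-mean, unit-variance output
```

Because every integrated framework is wrapped in this same shape, LightGBM models, OpenCV transforms, and Cognitive Services calls can all be chained into one Spark pipeline.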
SynapseML also includes tools for responsible AI, such as data balance analysis and model explainability. The library supports AutoML features, such as finding the best-performing model via hyperparameter search, as well as Spark-native implementations of several models: an anomaly-detection model for cyber security, an isolation-forest model for nonlinear outlier detection, and a conditional k-nearest-neighbour model.
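Hyperparameter search of the kind mentioned above boils down to evaluating a grid of candidate settings and keeping the best scorer; on Spark the evaluations would run in parallel across the cluster. The sketch below shows the pattern with a made-up scoring function (`train_and_score` is a hypothetical stand-in for training and validating a real model):

```python
from itertools import product

def train_and_score(lr, depth):
    # Hypothetical validation score; a real search would train a model
    # (e.g. LightGBM) with these settings and evaluate on held-out data.
    return -(lr - 0.1) ** 2 - (depth - 5) ** 2 / 100

# Candidate grid of hyperparameters.
grid = {"lr": [0.01, 0.1, 0.5], "depth": [3, 5, 8]}
candidates = [dict(zip(grid, vals)) for vals in product(*grid.values())]

# Pick the configuration with the highest validation score.
best = max(candidates, key=lambda p: train_and_score(p["lr"], p["depth"]))
```

Since each candidate is scored independently, the search parallelizes naturally, which is why distributing it over Spark pays off.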