Alphabet CEO Sundar Pichai introduced Vertex AI, a managed machine learning platform for deploying and maintaining AI models, during his keynote at the recently concluded Google I/O conference. The new platform brings AutoML and AI Platform together under a unified API, client library, and user interface.
“When we were training algorithms before, we would have to run millions of test images,” said Jeff Houghton, chief operating officer of L’Oréal’s ModiFace, which develops augmented reality and AI digital services for the beauty industry.
“Now, we can rely on the Vertex technology stack to do the heavy lifting. Vertex has the computing power to figure out complex problems. It can do billions of iterations, and Vertex comes up with the best algorithms,” Houghton added.
Supports all open source frameworks
Vertex AI integrates with popular open-source frameworks such as TensorFlow, PyTorch, and scikit-learn. It also supports all ML frameworks through custom containers for training and prediction.
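Under the hood, a custom-container training job reduces to a worker-pool spec naming the container image, so any framework the image carries can be trained. A minimal sketch, with placeholder image and project names (the spec shape follows the Vertex AI CustomJob API; the commented SDK calls require google-cloud-aiplatform):

```python
# Illustrative sketch: the essentials Vertex AI needs for a custom-container
# training job are an image URI plus machine resources. Names are placeholders.

def container_training_spec(image_uri: str,
                            machine_type: str = "n1-standard-4",
                            replica_count: int = 1) -> dict:
    """Build the worker-pool spec a custom-container training job boils down to."""
    return {
        "worker_pool_specs": [{
            "machine_spec": {"machine_type": machine_type},
            "replica_count": replica_count,
            # Any image works: TensorFlow, PyTorch, scikit-learn, XGBoost, ...
            "container_spec": {"image_uri": image_uri},
        }]
    }

# With the Python SDK this roughly corresponds to:
#   job = aiplatform.CustomContainerTrainingJob(
#       display_name="my-training", container_uri=image_uri)
#   model = job.run(replica_count=1, machine_type="n1-standard-4")
spec = container_training_spec("gcr.io/my-project/my-trainer:latest")
```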
Unified UI for the entire ML workflow
It brings the Google Cloud services for building ML together under one unified UI and API. You can efficiently train and compare models using AutoML or custom code training, and a central model repository stores all your models, which can then be deployed to the same endpoints.
Vertex AI offers pre-trained APIs for vision, natural language, and video, among others. You can easily incorporate them into existing applications or use them to build new ones across use cases such as Translation and Speech-to-Text.
AutoML allows developers to train high-quality models tailored to their business needs, with a central registry for all datasets, including vision and tabular data.
Developers can leverage BigQuery ML to create and execute machine learning models using standard SQL queries from existing tools and spreadsheets. Alternatively, they can export datasets from BigQuery into Vertex AI for integration across the data-to-AI life cycle.
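In BigQuery ML, a model really is just a SQL statement. The sketch below builds one with placeholder dataset and table names, following BigQuery ML's CREATE MODEL syntax; the commented lines show how it might be submitted with the google-cloud-bigquery client:

```python
# Illustrative only: model, table, and column names are placeholders.

def create_model_sql(model_name: str, source_table: str, label_col: str) -> str:
    """Build a BigQuery ML CREATE MODEL statement for a logistic regression."""
    return (
        f"CREATE OR REPLACE MODEL `{model_name}` "
        f"OPTIONS(model_type='logistic_reg', input_label_cols=['{label_col}']) AS "
        f"SELECT * FROM `{source_table}`"
    )

# Submitting it requires google-cloud-bigquery and GCP credentials:
#   from google.cloud import bigquery
#   bigquery.Client().query(create_model_sql(
#       "mydataset.churn_model", "mydataset.customers", "churned")).result()
```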
According to Google, Vertex AI requires nearly 80% fewer lines of code than other platforms to train a model with custom libraries. Its custom model tooling supports advanced ML coding, while its MLOps tools remove the complexity of self-service model maintenance, streamline the running of ML pipelines, and provide Vertex Feature Store to serve, share, and reuse ML features.
According to Google, data scientists can use Vertex AI without formal ML training: it offers every tool they need to manage their data, prototype, experiment, deploy models, and interpret and monitor them in production.
To sum up the benefits, Vertex AI:
- Enables training models with little code or ML expertise
- Helps build advanced ML models with custom tooling
- Removes the complexity of self-service model maintenance
Vertex AI can be used for:
- Creating a dataset and uploading data
- Training an ML model on your data
- Evaluating model accuracy
- Deploying a trained model to an endpoint for serving predictions
- Sending prediction requests to the endpoint
- Specifying a prediction traffic split
- Managing models and endpoints

Its key components include:
- Vertex Feature Store
- Vertex Model Monitoring
- Vertex Matching Engine
- Vertex ML Metadata
- Vertex TensorBoard
- Vertex Pipelines

Components of Vertex AI (Credit: Google)
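The workflow listed above can be sketched end to end with the Vertex AI Python SDK (google-cloud-aiplatform). This is a hedged sketch, not a definitive implementation: project, dataset, and column names are placeholders, and running it requires GCP credentials.

```python
def rows_to_instances(rows, feature_names):
    """Turn raw feature rows into the instance dicts an endpoint expects."""
    return [dict(zip(feature_names, row)) for row in rows]

def train_and_deploy(project: str, csv_uri: str, target: str):
    # Requires google-cloud-aiplatform and GCP credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location="us-central1")

    # 1) Create a dataset and upload data
    dataset = aiplatform.TabularDataset.create(
        display_name="my-dataset", gcs_source=csv_uri)

    # 2) Train a model on the data with AutoML
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="my-training",
        optimization_prediction_type="classification")
    model = job.run(dataset=dataset, target_column=target)
    # 3) Evaluation metrics become available in the console/API after training.

    # 4) Deploy the trained model to an endpoint, with a traffic split
    endpoint = model.deploy(machine_type="n1-standard-4", traffic_percentage=100)

    # 5) Send prediction requests to the endpoint
    instances = rows_to_instances(
        [[5.1, 3.5]], ["sepal_length", "sepal_width"])  # placeholder features
    return endpoint.predict(instances=instances)
```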
Developers can ingest data from BigQuery and Cloud Storage, and use Vertex Data Labeling to annotate high-quality training data for more accurate predictions. Vertex Feature Store can be used to serve, share, and reuse ML features, while Vertex Experiments tracks ML experiments and Vertex TensorBoard visualises them.
Vertex Pipelines can be used to simplify the MLOps process, and Vertex Training provides fully managed training services. Additionally, Vertex Vizier tunes hyperparameters to maximise predictive accuracy, and Vertex Prediction simplifies deploying models into production, whether for online serving via HTTP or for batch prediction for bulk scoring.
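For online serving, a prediction request is a plain HTTPS POST against the deployed endpoint. The sketch below only builds the URL and JSON body (project and endpoint IDs are placeholders; the URL pattern follows the Vertex AI REST API):

```python
def predict_request(project: str, location: str, endpoint_id: str,
                    instances: list) -> tuple[str, dict]:
    """Build the URL and JSON body for an online prediction call."""
    url = (f"https://{location}-aiplatform.googleapis.com/v1"
           f"/projects/{project}/locations/{location}"
           f"/endpoints/{endpoint_id}:predict")
    body = {"instances": instances}
    return url, body

# For bulk scoring, the SDK's Model.batch_predict instead reads input from and
# writes results to Cloud Storage or BigQuery, with no endpoint involved.
```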
Developers can get detailed model evaluation metrics and feature attributions. Moreover, Vertex ML Edge Manager, still in the experimental phase, can facilitate seamless deployment and monitoring of edge inferences and automated processes with flexible APIs, allowing developers to distribute AI across private and public clouds, on-premises environments, and edge devices.
For models deployed in the Vertex Prediction service, continuous monitoring tracks model performance, alerts when signals deviate, helps identify the cause, and can trigger model-retraining pipelines.
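As a toy illustration of the idea only (this is not the Vertex service's actual algorithm), drift detection can be as simple as comparing a live feature's distribution against its training baseline and alerting past a threshold:

```python
# Toy sketch: continuous monitoring boils down to comparing live prediction
# inputs against a training-time baseline. Threshold and statistic are
# illustrative choices, not anything Vertex documents.

def mean_shift(baseline: list[float], live: list[float]) -> float:
    """Relative shift of the live feature mean versus the training baseline."""
    base = sum(baseline) / len(baseline)
    cur = sum(live) / len(live)
    return abs(cur - base) / (abs(base) or 1.0)

def should_alert(baseline: list[float], live: list[float],
                 threshold: float = 0.25) -> bool:
    """Flag the feature (e.g. to trigger retraining) when drift is too large."""
    return mean_shift(baseline, live) > threshold
```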
Vertex ML Metadata allows easier tracking of inputs and outputs to components in Vertex Pipelines for artefact, lineage, and execution tracking. Lastly, developers can track custom metadata directly from their code and query metadata using a Python SDK.
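A hedged sketch of such tracking with the SDK's experiment-tracking calls (the experiment and run names are placeholders, and flatten_params is a hypothetical helper, not part of the SDK):

```python
def flatten_params(params: dict, prefix: str = "") -> dict:
    """Hypothetical helper: flatten nested config dicts into loggable key/values."""
    flat = {}
    for key, value in params.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_params(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

def track_run(project: str, params: dict, metrics: dict):
    # Requires google-cloud-aiplatform and GCP credentials.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location="us-central1",
                    experiment="my-experiment")  # placeholder experiment name
    aiplatform.start_run("run-1")                # placeholder run name
    aiplatform.log_params(flatten_params(params))
    aiplatform.log_metrics(metrics)
    aiplatform.end_run()
```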