Recently, Apple researchers, including C. V. Krishnakumar Iyer, Feili Hou, Henry Wang, Yonghong Wang, Kay Oh, Swetava Ganguli, and Vipul Pandey, have developed Trinity, a no-code AI platform for complex spatial datasets.
The platform enables machine learning researchers and non-technical geospatial specialists to experiment with domain-specific signals and datasets to solve various challenges. It tailors complex spatio-temporal datasets to fit standard deep learning models, in this case Convolutional Neural Networks (CNNs), and formulates disparate problems in a standard way, e.g. semantic segmentation.
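The core idea, turning heterogeneous spatio-temporal signals into fixed-size, image-like channel tensors with per-pixel label masks so a standard CNN can treat the task as semantic segmentation, can be sketched as follows. This is a minimal illustration: the grid size, channel choices, and binning scheme are assumptions, not Trinity's actual data format.

```python
# Minimal sketch: rasterize point-based geospatial signals into a
# multi-channel grid ("image") plus a per-pixel label mask, the standard
# input/target pair for semantic segmentation. Grid size, channels, and
# binning are illustrative assumptions, not Trinity's implementation.

GRID = 8  # cells per side of the map tile (hypothetical)

def rasterize(points, bounds, grid=GRID):
    """Bin (lon, lat, value) points into a grid x grid channel."""
    min_lon, min_lat, max_lon, max_lat = bounds
    channel = [[0.0] * grid for _ in range(grid)]
    for lon, lat, value in points:
        col = min(int((lon - min_lon) / (max_lon - min_lon) * grid), grid - 1)
        row = min(int((lat - min_lat) / (max_lat - min_lat) * grid), grid - 1)
        channel[row][col] += value
    return channel

# Two disparate signals mapped onto the same tile become two input channels
bounds = (0.0, 0.0, 1.0, 1.0)
gps_density = rasterize([(0.1, 0.1, 3.0), (0.9, 0.9, 1.0)], bounds)
road_signal = rasterize([(0.1, 0.1, 1.0)], bounds)
input_tensor = [gps_density, road_signal]  # shape: (channels, H, W)

# Per-pixel labels (e.g. 1 = "of interest") make the target a segmentation mask
label_mask = [[1 if gps_density[r][c] > 0 else 0 for c in range(GRID)]
              for r in range(GRID)]
```

Once every signal and label is expressed this way, very different problems share one input/output contract, which is what lets a single family of CNNs serve them all.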
“It creates a shared vocabulary leading to better collaboration among domain experts, machine learning researchers, data scientists, and engineers. Currently, the focus is on semantic segmentation, but it is easily extendable to other techniques such as classification, regression, and instance segmentation,” the paper states.
With the increase in smart devices, a high volume of data containing geo-referenced information is generated and captured. ML techniques have now entered the geospatial domain, including hyperspectral image analysis and high-resolution satellite image interpretation. However, deploying such solutions is still limited by specific challenges:
- Processing large volumes of spatio-temporal information and applying ML solutions requires specialised skills, creating a high barrier to entry that prevents non-technical domain specialists from solving problems on their own.
- Solutions are context-dependent: data from residential areas is very different from data from commercial ones, giving rise to non-standard preprocessing, post-processing, model deployment, and maintenance workflows.
- Engineers process data while scientists run experiments, and different problems require a lot of back and forth between the two, which hampers collaboration.
Trinity tackles these challenges by:
- Bringing information in disparate spatio-temporal datasets into a standard format by applying complex data transformations upstream.
- Standardising the technique of solving disparate-looking problems to avoid heterogeneous solutions.
- Providing an easy-to-use, code-free environment for rapid experimentation, thereby lowering the barrier to entry.
It enables quick prototyping, rapid experimentation and reduces the time to production by standardizing model building and deployment.
Trinity is composed of data pipelines, an experiment management system, a user interface, and a containerised deep learning kernel.
- The platform’s feature store is maintained in Amazon S3 (Simple Storage Service). Intermediate data, inputs, and processed predictions are stored in a distributed file system (HDFS). Metadata related to the experiments, including model versions, is stored in a PostgreSQL instance running on internal cloud infrastructure.
- Internal compute clusters host the GPUs and CPUs.
- Training is containerised using Docker for portability and packaging, and orchestrated by Kubernetes running on the GPU cluster. Large-scale distributed predictions are carried out on CPU clusters orchestrated by YARN.
- TensorFlow 2.1.0 is used for training deep learning models; Spark on YARN handles data preprocessing, channel processing, label handling, etc.
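The experiment metadata tracked in PostgreSQL might look something like the record below. This is a hypothetical sketch: the field names, paths, and structure are illustrative assumptions, not Trinity's actual schema.

```python
# Hypothetical sketch of experiment metadata such a platform might persist
# in PostgreSQL. Field names and structure are illustrative assumptions.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ExperimentRecord:
    experiment_id: str
    model_version: int                 # model versions are tracked per experiment
    feature_store_path: str            # features live in S3
    predictions_path: str              # processed predictions live in HDFS
    hyperparameters: dict = field(default_factory=dict)
    status: str = "created"

record = ExperimentRecord(
    experiment_id="exp-001",
    model_version=3,
    feature_store_path="s3://feature-store/tiles/",       # hypothetical path
    predictions_path="hdfs:///predictions/exp-001/",      # hypothetical path
    hyperparameters={"learning_rate": 1e-3, "epochs": 20},
)

# Serialising to JSON makes the record easy to pass between the UI,
# the data pipelines, and the deep learning kernel.
row = json.dumps(asdict(record))
```

Centralising records like this is what lets the experiment management system reproduce, compare, and version runs across users.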
The deep learning kernel is at the heart of the platform: it encapsulates neural-net architectures for semantic segmentation and provides model training, evaluation, metric handling, and inference. The kernel is currently implemented in TensorFlow but can easily be swapped for other frameworks.
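A framework-agnostic kernel like the one described can be sketched as an abstract interface that any backend must satisfy. The class and method names below are assumptions for illustration, and the toy implementation stands in for a real TensorFlow-backed one.

```python
# Sketch of a framework-agnostic segmentation kernel interface.
# Names (SegmentationKernel, train/evaluate/predict) are illustrative
# assumptions; swapping frameworks means supplying another implementation.
from abc import ABC, abstractmethod

class SegmentationKernel(ABC):
    """Encapsulates a semantic segmentation model behind a fixed API."""

    @abstractmethod
    def train(self, inputs, masks):
        """Fit the model on channel tensors and per-pixel label masks."""

    @abstractmethod
    def evaluate(self, inputs, masks):
        """Return metrics on held-out data."""

    @abstractmethod
    def predict(self, inputs):
        """Return a per-pixel class mask for each input tensor."""

class MajorityKernel(SegmentationKernel):
    """Toy stand-in for a TensorFlow-backed kernel: predicts the most
    frequent training label for every pixel."""

    def __init__(self):
        self.majority = 0

    def train(self, inputs, masks):
        counts = {}
        for mask in masks:
            for row in mask:
                for label in row:
                    counts[label] = counts.get(label, 0) + 1
        self.majority = max(counts, key=counts.get)

    def evaluate(self, inputs, masks):
        preds = self.predict(inputs)
        total = correct = 0
        for pred, mask in zip(preds, masks):
            for prow, mrow in zip(pred, mask):
                for p, m in zip(prow, mrow):
                    total += 1
                    correct += (p == m)
        return {"pixel_accuracy": correct / total}

    def predict(self, inputs):
        # Emit an H x W mask matching each input's first channel's shape
        return [[[self.majority] * len(t[0][0]) for _ in t[0]] for t in inputs]
```

Because the platform only ever talks to the `train`/`evaluate`/`predict` surface, a PyTorch or JAX implementation could be dropped in without touching the pipelines or the UI.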