
IBM’s Castor Makes It Easy To Manage Infinite Data From IoT Devices

A time-series model needs frequent retraining to maintain forecast accuracy. For example, when modelling weather data, the data scientist must keep pace with changes in the environment and monitor shifts in the underlying patterns, which requires regular benchmarking of the predictive models.

In contrast to AI applications that rely on a small number of large models for image processing or natural language, IBM's Castor targets Internet of Things applications, which need numerous smaller models.



Data Ingestion With IBM’s Castor


Every model is associated with an entity describing where the data originates; for example, “Company ABC”, and a signal describing what is measured, like “hourly revenue”.

This system allows users to retrieve data with a simple command like: getTimeseries(servername, “entity”, “measure”)
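The retrieval pattern above can be sketched in a few lines of Python. This is an illustrative mock, not the actual Castor client: the `get_timeseries` function and the in-memory store stand in for the real server-side API, and the entity and signal names are taken from the examples in the text.

```python
# Hypothetical sketch of Castor-style time-series retrieval.
# The store and function below stand in for the real Castor server API.

def get_timeseries(store, entity, signal):
    """Return the (timestamp, value) pairs registered for the given
    entity/signal pair, or an empty list if none exist."""
    return store.get((entity, signal), [])

# A toy in-memory "server": keys are (entity, signal) pairs.
store = {
    ("Company ABC", "hourly revenue"): [
        ("2023-01-01T09:00", 1200.0),
        ("2023-01-01T10:00", 1350.5),
    ],
}

series = get_timeseries(store, "Company ABC", "hourly revenue")
print(len(series))  # prints 2
```

The point of the entity/signal key is that callers never need to know where or how the data is physically stored; the pair is enough to locate the series.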

The training of the models is done using Python or R. The models are stored separately from configuration and runtime parameters, which enables the user to change the details of the model without redeployment.
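The benefit of separating model code from configuration can be shown with a minimal sketch. Everything here is an assumption for illustration (the store names and `register_model`/`update_config` helpers are invented, not Castor's API): the key idea is that updating a runtime parameter touches only the configuration entry, never the deployed model code.

```python
# Illustrative sketch (not the actual Castor API): model code and
# runtime configuration live in separate stores, so parameters can
# be changed without redeploying the model.

MODEL_STORE = {}    # model name -> callable
CONFIG_STORE = {}   # model name -> dict of runtime parameters

def register_model(name, fn, config):
    MODEL_STORE[name] = fn
    CONFIG_STORE[name] = dict(config)

def update_config(name, **changes):
    # No redeployment needed: only the configuration entry changes.
    CONFIG_STORE[name].update(changes)

def run(name, data):
    return MODEL_STORE[name](data, **CONFIG_STORE[name])

def moving_average(data, window=3):
    window = min(window, len(data))
    return sum(data[-window:]) / window

register_model("ma", moving_average, {"window": 2})
print(run("ma", [10, 20, 30]))   # prints 25.0 (mean of last 2 points)
update_config("ma", window=3)
print(run("ma", [10, 20, 30]))   # prints 20.0 (same code, new parameter)
```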

There is also a “time machine” feature, an interactive view showing the latest forecasts and observations.

Typical deployments see models trained every week or month and scored every hour. Deployed on IBM Cloud, Castor builds on IBM's DashDB, Compose, Cloud Functions, and Kubernetes, and maintains model provenance.

“With an entitled account on IBM Cloud, IBM Research Castor deploys in a matter of minutes, making it ideal for proof-of-concept as well as long-running projects. Client packages/SDKs for Python and R are provided so that data scientists can get up and running quickly in a familiar environment and visualisation teams can leverage familiar frameworks such as Django and Shiny,” says the team behind Castor.

Castor’s Architecture

Here we can see that Castor uses well-defined, task-specific microservices, such as one for storing newly acquired time series. These services are loosely coupled and are accessed via asynchronous messaging protocols.

In addition, the system can directly ingest data from external providers, such as weather data services. Models created by data scientists are kept in the model store.

A Castor model object can load data, transform data, train the model and then score it. So, to create a predictive model, the user implements a Castor model object, which is then stored in the model store via the Castor APIs.
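A model object with this shape might look like the sketch below. The interface (`load_data`, `transform_data`, `train`, `score`) is inferred from the description above, and the model itself is a deliberately trivial mean forecaster; none of this is Castor's actual API.

```python
# A minimal sketch of a Castor-style model object. The method names
# follow the four capabilities described in the text; the forecasting
# logic is a toy placeholder.

class ForecastModel:
    def __init__(self):
        self.mean = None

    def load_data(self, source):
        # In Castor the data would come from the time-series store;
        # here we just accept an in-memory list of values.
        return list(source)

    def transform_data(self, values):
        # Toy anomaly rule: drop non-positive readings.
        return [v for v in values if v > 0]

    def train(self, values):
        # Trivial "model": remember the mean of the training data.
        self.mean = sum(values) / len(values)

    def score(self, horizon):
        # Forecast the training mean for each future step.
        return [self.mean] * horizon

model = ForecastModel()
raw = model.load_data([10.0, -1.0, 14.0])
clean = model.transform_data(raw)   # -> [10.0, 14.0]
model.train(clean)                  # mean = 12.0
print(model.score(3))               # prints [12.0, 12.0, 12.0]
```

In the real system, an object like this would be serialised and stored via the Castor APIs, then executed later by the scheduling machinery described below.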

Time-series data is identified using contextual information, and Castor's configuration parameters allow the user to choose whether to train or score the model. The "transform data" functionality covers anomaly detection, feature engineering and data transformation. The anomaly-detection step applies static rules to remove outliers from the data, and also removes the part of the data with the most significant deviation.
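The two anomaly-handling behaviours just described can be sketched as plain functions. The specific rules below (a fixed valid range, then dropping the single worst deviation from the mean) are assumptions chosen to illustrate the idea of static rules; Castor's actual rules are not spelled out in the text.

```python
# Illustrative static-rule outlier removal. The thresholds and the
# "largest deviation" rule are invented for this example.

def remove_outliers(values, low, high):
    """Static rule: keep only readings inside a fixed valid range."""
    return [v for v in values if low <= v <= high]

def drop_largest_deviation(values):
    """Remove the single point that deviates most from the mean."""
    mean = sum(values) / len(values)
    worst = max(values, key=lambda v: abs(v - mean))
    out = list(values)
    out.remove(worst)
    return out

readings = [21.5, 22.0, -40.0, 21.8, 35.0]
in_range = remove_outliers(readings, low=-10.0, high=50.0)  # drops -40.0
cleaned = drop_largest_deviation(in_range)                  # drops 35.0
print(cleaned)  # prints [21.5, 22.0, 21.8]
```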

Feature engineering covers statistical features, such as the daily minimum, maximum or average temperature; combination features, such as time of day with day type, or solar radiance with season; and domain-specific features, such as energy-consumption peak hours and seasons.
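Two of those feature types can be illustrated in a few lines. The helpers below are hypothetical (Castor's feature-engineering interface is not described in detail): one computes the statistical daily summary, the other builds a combination feature by pairing time of day with day type.

```python
# Illustrative feature engineering: statistical features (daily
# min/max/average) and a combination feature (time of day x day type).
# The function names and the combined-key format are assumptions.

def daily_stats(temps):
    """Statistical features over one day of hourly temperatures."""
    return {
        "min": min(temps),
        "max": max(temps),
        "avg": sum(temps) / len(temps),
    }

def combine(hour, day_type):
    """Combination feature pairing time of day with day type,
    e.g. evening demand behaves differently on weekends."""
    return f"{hour}:{day_type}"

hourly_temps = [12.0, 15.0, 21.0, 18.0]
print(daily_stats(hourly_temps))   # prints {'min': 12.0, 'max': 21.0, 'avg': 16.5}
print(combine(18, "weekend"))      # prints 18:weekend
```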

The contextual data from IoT devices is a combination of an entity and a signal. A signal defines a physical quantity, like temperature or revenue, with respect to time. Complex semantic domain descriptions can be defined in Castor, as it can distinguish hierarchical and topological relationships between entities.
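A hierarchical relationship between entities can be represented as simply as a parent map. The hierarchy below (sensors inside a building, buildings inside a campus) is invented for illustration; Castor's internal representation is not described in the text.

```python
# Toy illustration of hierarchical entity relationships.
# The entities and the parent map are invented for this example.

PARENT = {
    "Sensor-1": "Building-A",
    "Sensor-2": "Building-A",
    "Building-A": "Campus",
}

def ancestors(entity):
    """Walk up the hierarchy from an entity to the root."""
    chain = []
    while entity in PARENT:
        entity = PARENT[entity]
        chain.append(entity)
    return chain

print(ancestors("Sensor-1"))  # prints ['Building-A', 'Campus']
```

A structure like this is what lets a high-level query such as "all signals under Building-A" resolve to the right set of time series.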

To do this, the system makes use of multi-modal data stored in the back-end. A high-level Castor API hides these complexities behind intuitive user queries.

Scheduling And Triggering

The serverless Apache OpenWhisk platform is employed to train and score the models. A model-training (or scoring) action triggers the execution of a Docker container that retrieves the model code and, on successful completion of the task, stores the new model version (or the forecasts).

The triggering of subsequent actions on completion of training is enabled by RabbitMQ, an open-source message broker that supports multiple messaging protocols and can be deployed in distributed configurations to meet high-scale requirements.

A model scheduler relies on three cloud functions, Init, Poll and Update, for deploying, scanning and updating tasks in the queues, respectively.
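The three-function split can be sketched with an in-memory queue. This is a simplification under stated assumptions: in Castor these run as separate cloud functions against a real message broker, whereas here `init`, `poll` and `update` share a single process and a `deque`.

```python
# Sketch of an Init/Poll/Update scheduler split, simplified to a
# single process with an in-memory queue (Castor runs these as
# separate cloud functions against a message broker).

from collections import deque

queue = deque()
results = []

def init(tasks):
    """Init: deploy tasks by placing them on the queue."""
    queue.extend(tasks)

def poll():
    """Poll: scan the queue for the next pending task."""
    return queue.popleft() if queue else None

def update(task):
    """Update: record the task's completion status."""
    results.append((task, "done"))

init(["train:model-a", "score:model-a"])
while (task := poll()) is not None:
    update(task)
print(results)  # prints [('train:model-a', 'done'), ('score:model-a', 'done')]
```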

To handle bursty workloads, Castor adopts dynamic scaling: additional resources are provisioned for peak-time data and scaled back once the peak passes. This is made possible by the OpenWhisk cloud service, which allows multiple actions to be executed within a few seconds.
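A dynamic-scaling policy of this kind reduces, at its core, to a rule mapping load to resource count. The sketch below shows one such rule; the thresholds, the tasks-per-worker ratio and the function itself are invented for illustration and are not Castor's actual policy.

```python
# Toy dynamic-scaling rule: scale workers up with queue depth during
# bursts and back down afterwards. All numbers are invented.

def workers_needed(queue_depth, per_worker=10, min_workers=1, max_workers=20):
    """One worker per 10 queued tasks, clamped to [min, max]."""
    needed = -(-queue_depth // per_worker)  # ceiling division
    return max(min_workers, min(needed, max_workers))

print(workers_needed(0))    # prints 1 (idle: stays at the minimum)
print(workers_needed(95))   # prints 10 (burst: scales up)
print(workers_needed(500))  # prints 20 (capped at max_workers)
```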

Cloud-based systems provide the service elasticity needed to handle intermittent surges in data from IoT devices. This flexibility reduces the complexity of writing additional infrastructure code for scaling and accelerates the execution of concurrent jobs.

So, with Castor, data scientists can:

  • Explore and retrieve all relevant time series and contextual information that is required for their predictive modelling tasks.
  • Seamlessly store and deploy their predictive models in a cloud production environment.
  • Monitor the performance of all predictive models in production and (semi-)automatically retrain them in case of performance deterioration.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
