The role of a typical trader on Wall Street is to keep the inflows high and the losses low. The models that monitor stock prices are built to handle the risks that come with market shifts.
A data engineer, on the other hand, who moves vast amounts of data across cloud infrastructure, has a surprisingly similar job — to avoid losses or delays of incoming data. Storing data and deploying machine learning models on cloud infrastructure has become popular with many companies. However, running this infrastructure is extremely expensive and consumes a lot of energy.
Inspired by risk theories employed by stock market investors, MIT CSAIL researchers have collaborated with Microsoft to develop a “risk-aware” mathematical model that could improve the performance of cloud-computing networks across the globe.
Drawing Parallels From Risk-Aware Models
The researchers at MIT statistically mapped three years’ worth of network signal-strength data from the Microsoft networks that connect its data centres to a probability distribution of link failures. Based on this analysis, they drew the following parallels:
- In network theory terms, allocated data bandwidth plays the role of the money invested, and the equipment responsible for momentary ups and downs plays the role of “stocks”.
- Just like stock prices, the probability of network equipment failing is uncertain.
- The input is the network topology represented as a graph, with source-destination flows of data connected through links between nodes (say, cities), and each link assigned a bandwidth.
- Mirroring its financial counterpart, a “risk-aware” model for cloud services can be built on the same underlying principles, so that it guarantees data will reach its destination almost every time.
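The analogy above can be sketched in code. In this hypothetical example (all city names, bandwidths, and failure probabilities are invented for illustration), each link carries an allocated bandwidth — the “investment” — and an uncertain probability of failure — the “volatility”:

```python
# Hypothetical topology: each link, like a stock, has an allocated
# bandwidth ("investment") and an uncertain failure probability
# ("volatility"). All names and numbers are made up.
links = {
    ("boston", "new_york"): {"bandwidth_gbps": 100, "p_failure": 0.01},
    ("new_york", "chicago"): {"bandwidth_gbps": 80, "p_failure": 0.05},
    ("chicago", "seattle"): {"bandwidth_gbps": 60, "p_failure": 0.02},
}

def expected_path_bandwidth(path_links, links):
    # A path's usable bandwidth is capped by its weakest link,
    # discounted by the chance that every link on the path stays up.
    min_bw = min(links[l]["bandwidth_gbps"] for l in path_links)
    p_all_up = 1.0
    for l in path_links:
        p_all_up *= 1.0 - links[l]["p_failure"]
    return min_bw * p_all_up

path = [("boston", "new_york"), ("new_york", "chicago")]
print(round(expected_path_bandwidth(path, links), 2))  # 80 Gbps * 0.99 * 0.95 = 75.24
```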
The fibre-optic cables used by cloud service providers run underground, connecting data centres in different cities. To monitor the traffic within these networks, “traffic engineering” (TE) software is used, which tries to optimise data-bandwidth allocation.
To address the challenges that come with traditional TE software, the researchers at MIT CSAIL designed a TE model that adapts core mathematics from “conditional value at risk,” a risk-assessment measure that quantifies the average loss of money in the worst-case fraction of outcomes.
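Conditional value at risk can be computed from samples as the average loss among the worst (1 − beta) fraction of outcomes. The sketch below uses invented loss numbers purely to illustrate the measure; TeaVaR's actual formulation is an optimisation over bandwidth allocations, not a post-hoc sample statistic:

```python
# Illustrative conditional value at risk (CVaR): the average loss among
# the worst (1 - beta) fraction of outcomes. Loss samples are invented.
def cvar(losses, beta=0.9):
    ordered = sorted(losses)               # losses in ascending order
    tail_start = int(beta * len(ordered))  # index of the beta-quantile
    tail = ordered[tail_start:]            # worst (1 - beta) outcomes
    return sum(tail) / len(tail)

# Ten simulated "lost bandwidth" outcomes (Gbps) for one allocation.
losses = [0, 0, 0, 1, 1, 2, 2, 3, 10, 40]
print(cvar(losses, beta=0.9))  # average of the worst 10%: 40.0
print(cvar(losses, beta=0.8))  # average of the worst 20%: 25.0
```

An allocation with a low CVaR rarely suffers a catastrophic loss, which is exactly the guarantee a cloud operator wants from its bandwidth plan.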
How Cloud Can Benefit From Traffic Engineering
The new model, called TeaVaR, takes into account failure probabilities of links between data centres worldwide, which is similar to predicting the volatility of stocks. An optimisation engine is then run, which allocates traffic through optimal paths to minimise loss while maximising overall usage of the network. The model is designed so that major cloud-service providers like Microsoft, Amazon, and Google, which together provide the majority of cloud services, can make better use of their networks.
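The spirit of the allocation step — steering traffic toward reliable paths — can be illustrated with a toy greedy pass. Note this is only a sketch under invented path names, capacities, and probabilities; the real TeaVaR engine solves a full optimisation problem rather than filling paths greedily:

```python
# Hypothetical sketch: given candidate paths for one flow, each with a
# capacity and an estimated failure probability, fill demand on the most
# reliable paths first. The real TeaVaR engine solves an optimisation
# problem; this greedy pass only illustrates risk-averse steering.
def allocate(demand_gbps, paths):
    allocation = {}
    remaining = demand_gbps
    # Prefer paths least likely to fail, like a risk-averse portfolio.
    for path in sorted(paths, key=lambda p: p["p_failure"]):
        if remaining <= 0:
            break
        share = min(remaining, path["capacity_gbps"])
        allocation[path["name"]] = share
        remaining -= share
    return allocation, remaining  # remaining > 0 means unmet demand

paths = [
    {"name": "via_chicago", "capacity_gbps": 50, "p_failure": 0.05},
    {"name": "via_denver", "capacity_gbps": 40, "p_failure": 0.01},
]
print(allocate(60, paths))  # ({'via_denver': 40, 'via_chicago': 20}, 0)
```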
To obtain the probabilities of failure, the signal quality of every link was checked every 15 minutes. If the signal quality dropped below a certain receiving threshold, the link was considered to have failed; otherwise, it was counted as up.
The model used these measurements to estimate how long, on average, each link stayed up or down, which allowed it to predict how likely a risky link was to fail at any given point in time.
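Following the article's description of the measurement step, a failure probability can be estimated as the fraction of 15-minute readings that fall below the receiving threshold. The threshold value and the samples below are made up for illustration:

```python
# Sketch of the measurement step: every 15 minutes a link's signal
# quality is sampled; a reading below a receiving threshold counts as a
# failure. Threshold and samples are hypothetical.
RECEIVING_THRESHOLD = -28.0  # invented signal-quality cutoff

def failure_probability(samples, threshold=RECEIVING_THRESHOLD):
    failures = sum(1 for s in samples if s < threshold)
    return failures / len(samples)

# A day of 15-minute samples would give 96 readings; use 8 here.
samples = [-25.1, -24.8, -30.2, -25.0, -24.9, -29.5, -25.3, -25.0]
p_fail = failure_probability(samples)
print(p_fail)      # 2 of 8 readings below threshold: 0.25
print(1 - p_fail)  # expected fraction of time the link is up: 0.75
```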
When the researchers tested the model against other traffic-engineering alternatives, the results showed that it supported three times the traffic throughput of traditional traffic-engineering methods, while maintaining the same high level of network availability.
The model managed to keep the reliable links working at near full capacity, while steering data clear of riskier links. The code is available on GitHub.
- In experiments based on real-world data, the model supported three times the traffic throughput of traditional traffic-engineering methods.
- Better network utilisation can save service providers millions of dollars.
- The model could help major cloud-service providers — such as Microsoft, Amazon, and Google — better utilise their infrastructure.
- Results show that TeaVaR can support up to twice as much traffic as today’s state-of-the-art TE schemes at the same level of availability.
Applying risk models from the stock market to the way data is stored and moved shows how much an interdisciplinary approach can offer. Finding a solution at the intersection of network theory, stock markets and cloud computing is a great display of ingenuity by the researchers.
The authors behind this work believe that the success of this model will allow companies to utilise data-centre resources more efficiently and save the enormous amounts of energy consumed by cloud infrastructure.
I have a master's degree in Robotics and I write about machine learning advancements. email:email@example.com