How GraphLab’s Framework Powers Parallel Computing For Machine Learning

Advances in computing hardware have paved the way for easier implementation of machine learning and artificial intelligence applications that solve real-world problems. But most recent developments in computer architecture have focused on parallel scaling rather than frequency scaling. This shift is unfavourable for sequential ML algorithms, which cannot exploit additional cores on their own. It therefore calls for a general framework that lets ML models work efficiently in a parallel computing environment.

In this article, we will discuss GraphLab (now called Turi), a distributed computation framework written in C++. It was originally developed by academics at Carnegie Mellon University for handling ML tasks, but has since been extended to data mining applications as well. It provides a high-level programming interface for graph-based ML algorithms and is best suited to sparse data and iterative algorithms. It also offers a Python library for ML workloads. It enables easy and efficient design of parallel ML algorithms by taking care of computation, data consistency and scheduling.
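As a quick taste of that Python interface, here is a minimal sketch assuming the GraphLab Create package is installed; the toy dataset and column names below are invented for illustration:

```python
import graphlab  # GraphLab Create / Turi Python package (assumed installed)

# Load toy tabular data into an SFrame, GraphLab's scalable dataframe
data = graphlab.SFrame({'x': [1.0, 2.0, 3.0, 4.0],
                        'y': [2.1, 3.9, 6.2, 8.1]})

# Train a simple regression model; GraphLab parallelises the work internally
model = graphlab.linear_regression.create(data, target='y', features=['x'])
print(model.predict(data))
```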

Graphical ML Models: The Basis For Creating GraphLab

Carlos Guestrin, the founder of GraphLab, recalls that the platform was developed to speed up computations on graphical ML models. These models were difficult to run on frameworks such as Hadoop, and when they did run, they took a long time. Since the graphs were computation-intensive, they demanded a computation framework built specifically for ML algorithms over graph-structured data. Thus emerged GraphLab, a platform that could address data and computation together on a shared-memory architecture.

This is made possible by offering users a high-level data abstraction, embodied in the data graph. GraphLab thus strikes a fine balance between low-level abstractions such as POSIX threads and high-level abstractions such as MapReduce, without burdening ML experts with intricate parallelism details. As mentioned earlier, GraphLab targets sparse data and iterative computations to achieve this balanced level of abstraction.

The GraphLab Data Model

According to the design developed by Carlos Guestrin and team at Carnegie Mellon University, the data model in GraphLab consists of a data graph and a shared data table. In the words of the researchers,

“The data graph G = (V, E) encodes both the problem specific sparse computational structure and directly modifiable program state. The user can associate arbitrary blocks of data (or parameters) with each vertex and directed edge in G. We denote the data associated with vertex v by Dv, and the data associated with edge (u → v) by Du→v. In addition, we use (u → ∗) to represent the set of all outbound edges from u and (∗ → v) for inbound edges at v. To support globally shared state, GraphLab provides a shared data table (SDT) which is an associative map, T [Key] → Value, between keys and arbitrary blocks of data.”
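To make this concrete, here is a minimal plain-Python sketch of the two-part data model: a data graph carrying vertex data Dv and edge data Du→v, plus a shared data table. This is purely illustrative, not the actual GraphLab C++ API:

```python
# Illustrative sketch of GraphLab's data model; not the real C++ API.

class DataGraph:
    """Directed graph G = (V, E) with data blocks on vertices and edges."""
    def __init__(self):
        self.vertex_data = {}   # v -> D_v
        self.edge_data = {}     # (u, v) -> D_{u->v}
        self.out_edges = {}     # u -> set of v with (u -> v) in E
        self.in_edges = {}      # v -> set of u with (u -> v) in E

    def add_vertex(self, v, data):
        self.vertex_data[v] = data
        self.out_edges.setdefault(v, set())
        self.in_edges.setdefault(v, set())

    def add_edge(self, u, v, data):
        self.edge_data[(u, v)] = data
        self.out_edges[u].add(v)
        self.in_edges[v].add(u)

# Shared data table (SDT): an associative map T[Key] -> Value
shared_data_table = {'tolerance': 1e-3, 'damping': 0.5}
```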

The researchers used loopy belief propagation on Markov Random Fields (MRFs) to demonstrate the framework's functionality on a representative ML workload.
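Since loopy belief propagation is the running example, it helps to recall the message each vertex passes along an edge. For a pairwise MRF with vertex potential φu and edge potential ψu,v, the standard message from u to v is

mu→v(xv) ∝ Σxu φu(xu) ψu,v(xu, xv) Πw mw→u(xu),

where the product runs over the in-neighbours w of u other than v. This maps directly onto the data model above: messages are stored as edge data Du→v, while potentials and beliefs are stored as vertex data Dv.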

The Structure For Developing A Parallel Model In GraphLab

Referring back to the design for executing parallel ML algorithms, GraphLab is built around the three key components put forth below:

  1. User-defined computation: Computation in GraphLab centres on an ‘update function’ defined by the user, which reads and modifies the data within a vertex’s scope. Computation can also be invoked through a sync mechanism, which performs a global aggregation over the graph and writes its result into the shared data table. (A toy sketch of how the three components fit together appears after this list.)
  2. Data consistency: GraphLab provides three consistency models to balance correctness against parallel performance: the full consistency model, the edge consistency model and the vertex consistency model. They differ in how much of the scope of an update function f(v) is protected while it executes. The full consistency model gives f(v) exclusive access to the entire scope: the vertex v, its adjacent edges and its neighbouring vertices. The edge consistency model guarantees exclusive access only to v and its adjacent edges, while the vertex consistency model protects v alone, allowing the most parallelism but offering the weakest guarantees. Choosing the appropriate model is therefore essential when parallelising an ML algorithm. The consistency models are illustrated below.

    Consistency models depiction (Image courtesy: Carlos Guestrin)
  3. Scheduling: The update schedule determines the order in which update functions are applied to vertices, and is represented by a data structure called the scheduler. The scheduler maintains a dynamic list of vertex–function pairs which the GraphLab engine executes, subject to the chosen consistency model. Schedulers can be matched to the ML algorithm: a synchronous algorithm can use a FIFO scheduler, while residual belief propagation converges faster with a priority scheduler that updates the highest-residual vertices first.
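To see how these components interlock, here is a toy, single-threaded Python sketch. It is purely illustrative: the real GraphLab engine is a parallel C++ runtime, and every name below is invented for exposition. It reuses the DataGraph class from the data-model sketch above and uses a PageRank-style update with residual-based priority scheduling for brevity:

```python
import heapq

# Illustrative single-threaded sketch of the GraphLab execution model;
# all names here are invented for exposition, not the real API.

def pagerank_update(v, graph, scheduler, shared):
    """User-defined update function: recompute D_v from in-neighbour data.

    Under the edge consistency model, the engine would lock v and its
    adjacent edges for the duration of this call; running serially here,
    no locking is needed.
    """
    old = graph.vertex_data[v]
    incoming = sum(graph.vertex_data[u] / max(len(graph.out_edges[u]), 1)
                   for u in graph.in_edges[v])
    graph.vertex_data[v] = 0.15 + 0.85 * incoming
    residual = abs(graph.vertex_data[v] - old)
    if residual > shared['tolerance']:      # reschedule affected neighbours
        for w in graph.out_edges[v]:
            scheduler.push(w, priority=residual)

class PriorityScheduler:
    """Priority scheduler: highest-residual vertices are updated first."""
    def __init__(self):
        self._heap = []
    def push(self, v, priority=0.0):
        heapq.heappush(self._heap, (-priority, v))
    def pop(self):
        return heapq.heappop(self._heap)[1]
    def empty(self):
        return not self._heap

def run_engine(graph, scheduler, update, shared):
    """Engine loop: drain the scheduler, applying the update function."""
    while not scheduler.empty():
        update(scheduler.pop(), graph, scheduler, shared)
```

Seeding the scheduler with every vertex and calling run_engine(graph, PriorityScheduler(), pagerank_update, {'tolerance': 1e-3}) drives the computation to convergence. GraphLab's real engine follows the same pop–update–reschedule loop, but across many threads, with the chosen consistency model enforcing safe concurrent access to each vertex's scope.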

A GraphLab program combines these three components: the user supplies the update functions, selects a consistency model and chooses a scheduler, and the GraphLab engine then executes the scheduled updates in parallel while enforcing the chosen consistency guarantees.

Conclusion

GraphLab gives parallel ML algorithms a computing environment with ready-made parallel data structures, so practitioners can exploit multicore hardware without managing threads and locks themselves. With GraphLab’s API, ML professionals can also port algorithms expressed in other frameworks, such as MapReduce, onto its graph abstraction.
