Declarative machine learning (ML) attempts to automate the generation of efficient execution plans from high-level ML problems or method specifications. The overriding goal is to make ML methods easy to use and/or construct, which is especially important in complex applications. In this article, we will look at what declarative learning is and how Ludwig, a toolbox built by Uber AI, can be used in this context. The major points to be discussed in this article are listed below.
Table of Contents
- What is Declarative Learning?
- How Is Ludwig a Declarative ML Toolbox?
- Architectural Details
- Type-Based Abstraction
- Implementing Ludwig
Let’s start the discussion by understanding what declarative learning is in machine learning.
What is Declarative Learning?
Declarative ML aims to simplify the usage and/or creation of ML algorithms by isolating application or algorithm semantics from the underlying data representations and execution plans, resulting in a high-level definition of ML tasks or algorithms. The following are the defining properties of declarative ML:
- Data Structure Independence: Inputs, intermediate results, and outputs such as matrices and scalars are exposed as abstract data types, with no access to the underlying physical data representations.
- Data Flow Properties Independence: Data flow properties are not exposed, i.e., the user has no explicit control over partitioning, caching, or blocking configurations at the specification level.
- Analysis-Centric Operation Primitives: The target analytics domain supports basic operation primitives. This includes linear algebra and statistical functions for ML algorithms and task-specific primitives and models for ML tasks.
- Operation Primitives with Known Semantics: The semantics of operation primitives used to specify ML tasks or algorithms are known to the system in terms of operating characteristics and equivalences.
- Implementation-Agnostic Operations: The specification of machine learning tasks or algorithms is unaffected by the underlying runtime operations. This property rules out user-defined execution strategies and parameterization.
- Well-Defined Plan Optimization Objective: ML tasks or algorithms use a well-defined (potentially multicriteria) objective for execution plan optimization to specify their expected results unambiguously.
- Results that are implementation-agnostic: The outcomes of machine learning tasks or algorithms, as well as individual operations, are equivalent (basically the same), regardless of the type or location of the underlying runtime operations.
- Deterministic Results: For multiple executions over the same input data and configuration, a given ML task or algorithm yields equivalent (essentially the same) results. Pseudorandom number generators are used to create randomized tasks or algorithms.
How Is Ludwig a Declarative ML Toolbox?
Ludwig is a deep learning toolbox built on the level of abstraction indicated above, with the purpose of encapsulating best practices and exploiting inheritance and code modularity. Ludwig makes it easier for practitioners to create deep learning models by simply declaring their data and tasks, and to reuse and extend models while following best practices.
Models are grouped into equivalence classes named after the data types of the inputs handled by their encoding functions (image, text, series, category, etc.) and the data types of the outputs predicted by their decoding functions.
This type-based abstraction allows for a higher-level interface than deep learning frameworks currently provide, which abstract at the tensor-operation or layer level. It is achieved by providing an abstract interface for each data type, which can be extended simply by supplying a new implementation of that interface.
Ludwig is based on the concept of a declarative model specification, which allows a much broader audience (including non-programmers) to use deep learning models, thereby democratizing them.
One of the fundamental elements that define Ludwig’s design is a type-based abstraction. Ludwig currently supports the following types: binary, numerical (floating-point values), category (unique strings), set of categorical elements, a bag of categorical elements, sequence of categorical elements, time series (sequence of numerical elements), text, image, audio (which doubles as speech when different preprocessing parameters are used), date, H3 (a geospatial indexing system), and vector (one dimensional tensor of numerical values). It’s simple to create more types thanks to the type-based abstraction.
Every model in Ludwig is made up of encoders that encode various aspects of an input data point, a combiner that combines the information from the various encoders, and decoders that decode the information from the combiner into one or more output features. This generic design is called Encoders-Combiner-Decoders (ECD). The figure below illustrates it.
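Conceptually, the ECD data flow can be sketched in a few lines of Python. This is an illustration of the pattern only, not Ludwig's actual implementation; real encoders, combiners, and decoders are neural network modules, whereas here they are plain callables:

```python
# Minimal sketch of the Encoders-Combiner-Decoders (ECD) pattern.
# Illustrative only: not Ludwig's real implementation.

def ecd_forward(inputs, encoders, combiner, decoders):
    """Run one forward pass through an ECD-style model."""
    # 1. Encode each input feature with its type-specific encoder.
    encoded = {name: encoders[name](value) for name, value in inputs.items()}
    # 2. Combine the encoded representations into a single hidden state.
    hidden = combiner(encoded)
    # 3. Decode the hidden state into each output feature.
    return {name: decoder(hidden) for name, decoder in decoders.items()}

# Toy usage: the "encoders" just measure text length, the combiner sums,
# and the decoder thresholds the combined value.
encoders = {"title": len, "body": len}
combiner = lambda enc: sum(enc.values())
decoders = {"is_long": lambda h: h > 20}

outputs = ecd_forward(
    {"title": "Ludwig", "body": "declarative deep learning"},
    encoders, combiner, decoders,
)
print(outputs)  # {'is_long': True}
```

The point of the sketch is the separation of concerns: each encoder only knows its own input type, the combiner only sees encoded representations, and each decoder only sees the combined hidden state.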
This architecture is used because it maps onto most deep learning model architectures and allows for modular composition. Instead of constructing an entire model from scratch, the data type abstraction lets you define a model by simply declaring the data types of the input and output features involved in the task; the appropriate standard sub-modules are then assembled accordingly.
An ECD architecture instantiation can contain many input features of different or the same type, and the same is true for output features. Pre-processing and encoding functions are computed for each feature in the input portion based on the type of the feature, while decoding, metrics, and post-processing functions are computed for each feature in the output part based on the type of each output feature.
The ECD design allows for numerous instantiations, as demonstrated in the figure below, by combining input features of different data types with output features of different data types. An ECD with a text input feature and a category output feature can be trained to do text classification or sentiment analysis; an ECD with an image input feature and a text output feature can be trained to do image captioning; and an ECD with category, binary, and numerical input features and a numerical output feature can be trained on regression tasks such as house price prediction.
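For instance, the text-classification instantiation could be declared with a configuration along these lines (the feature names `review` and `sentiment` are illustrative, not taken from the article):

```yaml
input_features:
  - name: review
    type: text
output_features:
  - name: sentiment
    type: category
```

Swapping `type: text` for `type: image` on the input, or `type: category` for `type: numerical` on the output, yields the other instantiations described above without any model code changing.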
Training our desired deep learning models is now much easier using Ludwig. Normally, for training and testing, we write a lot of code for steps like pre-processing and model building. With Ludwig, we only need to define a model configuration file: a YAML file that specifies which columns of a tabular file are input features and which are output target variables. YAML (a recursive acronym for "YAML Ain't Markup Language") is a human-readable data serialization language, frequently used in configuration files and in data storage and transmission applications. Here, it is a file containing the input and output definitions, along with other parameters.
- Now let’s first install Ludwig and other dependencies using the pip command.
! pip install ludwig
! pip install petastorm
- Now create a model configuration file. For this, we can simply use a text editor, define the minimum required parameters, change the file extension from .txt to .yaml, and upload the file to the working directory.
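On Colab, where a desktop text editor is not at hand, the same file can be created with a short Python snippet. The feature names below are placeholders, not the article's actual configuration; replace them with the column names of your own dataset:

```python
# Write a minimal Ludwig configuration file from Python.
# The feature names are placeholders for your dataset's actual columns.
config = """\
input_features:
  - name: some_numeric_column
    type: numerical
  - name: some_categorical_column
    type: category
output_features:
  - name: target_column
    type: binary
"""

with open("model_config.yaml", "w") as f:
    f.write(config)

# Show what was written.
print(open("model_config.yaml").read())
```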
Before defining the configuration, take a look at the dataset used here, which is taken from this Kaggle repository and is about Heart Failure Prediction.
Now, based on the dataset, our model configuration file looks like the one below. Here I have defined two output features.
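The configuration file itself is not reproduced in the text. As a rough sketch, assuming the column names of the Kaggle Heart Failure Prediction dataset, a minimal version might look like the following (shown with a single binary target for simplicity; the exact features and the second output the author used are not available and are therefore assumptions):

```yaml
input_features:
  - name: Age
    type: numerical
  - name: Sex
    type: category
  - name: ChestPainType
    type: category
  - name: Cholesterol
    type: numerical
  - name: MaxHR
    type: numerical
output_features:
  - name: HeartDisease
    type: binary
```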
- That’s it. Now simply running the command below will carry out all the needed steps, such as encoding and normalization. We just need to supply the training dataset and the configuration file.
!ludwig train \
  --dataset '/content/heart.csv' \
  --config_file model_config.yaml
After the above command finishes executing, you can see the result. The accuracy at the 1st epoch and at the 100th epoch is depicted below.
In this post, we covered what declarative learning is in machine learning and introduced Ludwig, a deep learning toolbox. The toolbox has several advantages in flexibility, extensibility, and ease of use, allowing both professionals and beginners to train deep learning models, use them to make predictions, and experiment with different architectures without writing code. This post walked through a very basic use case of the toolbox; I encourage you to go through the official documentation and the research paper, where you can see how to address more complex ML tasks in the way we have discussed here.