If you ever want to work and research in the Machine Learning arena, these are the 8 key terms you cannot ignore. Are you ready?
1. Algorithms
Algorithms are a basic element in the world of Machine Learning. An algorithm is a logical sequence of instructions that describe step by step how to solve a problem.
Most often, an algorithm works as a sequence of simple if → then statements; others are more complex and include mathematical equations or formulas.
The objective of a Machine Learning algorithm is to define the steps necessary to learn from the data and solve a problem autonomously.
Some popular Machine Learning algorithm families are clustering, regression, or recommendation algorithms.
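To make the distinction concrete, here is a minimal sketch (the rules, messages, and thresholds are purely illustrative) contrasting a hand-written if → then algorithm with one that learns its rule from data:

```python
# A hand-written algorithm: a fixed sequence of if -> then rules.
def spam_rule(message):
    if "free money" in message.lower():
        return "spam"
    if "urgent offer" in message.lower():
        return "spam"
    return "not spam"

# A Machine Learning algorithm instead *learns* its rule from labeled data:
# here, the message-length threshold that best separates the two classes.
def learn_length_threshold(messages, labels):
    best_threshold, best_accuracy = 0, 0.0
    for threshold in range(1, 200):
        predictions = ["spam" if len(m) < threshold else "not spam" for m in messages]
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold
```

The first function will never improve no matter how much data it sees; the second adapts its behavior to whatever examples it is given, which is the essence of learning from data.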
2. Deep Learning
Deep Learning is a set of algorithms that seek to mimic the way the human brain processes information.
These algorithms process data through stacked groups of artificial neurons that loosely simulate the basic functioning of the brain. In Deep Learning, these stacked groups of neurons are known as “layers”.
Just as our brain learns when faced with something new (how to speak, how to ride a bike, etc.), these algorithms imitate that process by learning to recognize repeated patterns, specific words, and frequent behaviors, so that they can respond automatically to input data, much as our brain responds to any stimulus.
3. Neural Networks
Neural networks are a class of machine learning algorithms used to model complex patterns in data sets using multiple hidden layers and non-linear activation functions.
A neural network takes an input, passes it through multiple hidden layers of neurons, and generates a prediction that combines the outputs of all the neurons.
Neural networks are trained iteratively using optimization techniques such as gradient descent. After each training cycle, an error metric is calculated based on the difference between the prediction and the target.
The derivatives of this error metric are calculated and propagated through the network using a technique called backpropagation. The coefficients (weights) of each neuron are adjusted based on how much they contributed to the total error.
This process is repeated iteratively until the network error falls below an acceptable threshold.
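The training loop described above can be sketched with a single artificial neuron, the smallest possible network. This toy example (the data, learning rate, and epoch count are illustrative choices) uses gradient descent and the chain rule, the same mechanism backpropagation applies layer by layer in larger networks:

```python
import math
import random

random.seed(0)

# Training data for the OR function: inputs and targets.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def sigmoid(z):
    """Non-linear activation function squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One artificial neuron: two weights and a bias, randomly initialized.
w1, w2, b = random.random(), random.random(), random.random()
learning_rate = 1.0

for epoch in range(2000):
    for (x1, x2), target in data:
        # Forward pass: compute the prediction.
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        # Backward pass: derivative of the squared error w.r.t. the neuron's
        # pre-activation, via the chain rule (this is backpropagation in miniature).
        grad = 2 * (pred - target) * pred * (1 - pred)
        # Adjust each weight in proportion to how much it contributed to the error.
        w1 -= learning_rate * grad * x1
        w2 -= learning_rate * grad * x2
        b  -= learning_rate * grad

# After training, the neuron reproduces the OR function.
for (x1, x2), target in data:
    assert round(sigmoid(w1 * x1 + w2 * x2 + b)) == target
```

A real network repeats exactly this update for every neuron in every layer, propagating the error derivatives backward from the output toward the input.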
4. Natural Language Processing (NLP)
Natural Language Processing is a very broad term that encompasses all techniques related to the processing of human communications, both oral and written language.
Traditionally, NLP analysis was based on lexicographic rules. With the rise of Machine Learning, these rules can be combined with new AI techniques such as Deep Learning; among these, LSTM networks stand out.
The practical applications of NLP are many and have experienced spectacular growth thanks to the new techniques of Machine Learning.
There are multiple NLP applications, among which we can highlight: text translation, speech-to-text, text-to-speech, entity extraction, text classification, sentiment and emotion analysis, and even chatbots and virtual assistants such as Alexa or Siri.
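As a taste of the rule-based end of the spectrum, sentiment analysis can be sketched with a toy word lexicon (the word lists below are illustrative, not a real sentiment lexicon):

```python
# A toy lexicon-based sentiment scorer: count positive vs. negative words.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Modern NLP systems replace these hand-picked word lists with representations learned from data, which is exactly where Deep Learning techniques such as LSTM networks come in.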
5. Regression
Regression problems seek to model the behavior of a quantitative variable (target variable) based on other predictor variables (components or features), which can be quantitative or qualitative, with the usual objective of making predictions or estimates.
There are several algorithms to solve this type of problem. Among them, the following stand out:
–Linear regression: combines a series of coefficients and an independent term that, applied to the values of the component variables, approximate the value of the target variable. It requires quantitative variables and stands out for its ease of interpretation.
–Decision trees: the sample is recursively partitioned down to the depth of the tree, and the values of the target variable in each leaf node are averaged. Its interpretation is simple, and it works with both qualitative and quantitative component variables.
–Random Forest and Gradient-Boosted Trees: these models, also available for classification problems, can be used in regression to achieve a better fit of the target variable, at the cost of making the models harder to interpret.
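The simplest of these, linear regression with a single predictor, can even be fit in closed form. This sketch implements ordinary least squares directly from its textbook formulas:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one predictor: y ≈ slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    # Intercept: chosen so the fitted line passes through the means.
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

For example, fitting the points (1, 3), (2, 5), (3, 7), (4, 9) recovers the line y = 2x + 1 exactly, which is what makes linear regression so easy to interpret: each coefficient states how much the target changes per unit of its predictor.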
6. Reinforcement Learning
Reinforcement learning is one of the three categories into which learning is usually grouped (along with supervised and unsupervised learning). It differs from the others in that it deals with goal-oriented algorithms: the algorithm must learn how to achieve a complex or long-term goal through several steps.
The following concepts should be clearly understood:
–Agent: the element that executes the actions.
–Policy: the strategy that decides what actions are executed based on a state.
–Environment: the world in which the agent moves.
–Reward: the measure on which the goodness of an action is decided.
–State: the situation of the environment at a specific time.
–Action: one of several acts that the agent can perform.
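All of these concepts fit in a few lines of Q-learning, one of the classic reinforcement learning algorithms, applied here to a toy corridor environment (the corridor, reward scheme, and hyperparameters are illustrative choices):

```python
import random

random.seed(0)

# Environment: a corridor of 5 states; reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # Action: step left or step right.

def step(state, action):
    """Environment dynamics: next state and reward for an action."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Agent: learns Q-values, the expected return of each action in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Policy: mostly greedy, sometimes exploratory (epsilon-greedy).
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state, reward = step(state, ACTIONS[a])
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned policy prefers moving right (toward the goal) in every non-goal state.
assert all(Q[s][1] > Q[s][0] for s in range(GOAL))
```

Notice how the goal is reached only after several steps: the reward at the end is propagated backward through the Q-values, which is exactly how reinforcement learning handles long-term goals.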
7. TensorFlow
TensorFlow is an open-source library developed by Google for carrying out Machine Learning projects.
TensorFlow was created by the Google Brain team and released in 2015 under the Apache 2.0 license. Today it is one of the most widespread tools in the world of Machine Learning, particularly for the construction of neural networks.
Although TensorFlow is used mainly in the Machine Learning area, it can also be used for other types of algorithms that require numerical computation tasks using dataflow graphs.
There are other alternatives to TensorFlow on the market such as PyTorch from Facebook and MXNet from Amazon.
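The dataflow-graph idea at TensorFlow's core can be illustrated in a few lines of plain Python (this is a conceptual sketch only, not TensorFlow's actual API): you first build a graph of operations, then run it with concrete values.

```python
# A minimal computation-graph sketch of the dataflow idea behind TensorFlow.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self, feed):
        """Run the graph: recursively evaluate inputs, then apply this op."""
        if self.op == "input":
            return feed[self.inputs[0]]  # look up a fed-in value by name
        values = [node.evaluate(feed) for node in self.inputs]
        if self.op == "add":
            return values[0] + values[1]
        if self.op == "mul":
            return values[0] * values[1]
        raise ValueError(self.op)

# Build the graph for y = (a * b) + c, then run it with concrete values.
a, b, c = Node("input", "a"), Node("input", "b"), Node("input", "c")
y = Node("add", Node("mul", a, b), c)
result = y.evaluate({"a": 2, "b": 3, "c": 4})  # 2 * 3 + 4 = 10
```

Separating graph construction from execution is what lets frameworks like TensorFlow optimize, parallelize, and differentiate the computation before running it.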
8. Computer Vision
Computer Vision is a broad area of expertise that aims to describe the world we see in one or more images and to reconstruct its properties, such as shape, illumination, and color distributions. It is amazing that humans and animals do this so effortlessly, while computer vision algorithms are so error-prone. Some real-life applications are:
–Optical character recognition (OCR): reading handwritten postal codes on letters
–Machine inspection: rapid parts inspection for quality assurance, using stereo vision with specialized illumination to measure tolerances on aircraft wings or auto body parts, or looking for defects in steel castings using X-ray vision.
–Retail: object recognition for automated checkout lanes
–3D model building (photogrammetry): fully automated construction of 3D models from aerial photographs used in systems such as Google Maps.
–Medical imaging: registering pre-operative and intra-operative imagery or performing long-term studies of people’s brain morphology as they age.
–Automotive safety: detecting unexpected obstacles such as pedestrians on the street, under conditions where active vision techniques such as radar or lidar do not work well.
–Match move: merging computer-generated imagery (CGI) with live action footage by tracking feature points in the source video to estimate the 3D camera motion and shape of the environment. Such techniques are widely used in Hollywood (e.g., in movies such as Jurassic Park), as well as in precise matting to insert new elements between foreground and background elements.
–Motion capture (mocap): using retro-reflective markers viewed from multiple cameras or other vision-based techniques to capture actors for computer animation.
–Surveillance: monitoring for intruders, analyzing highway traffic, and monitoring pools for drowning victims.
–Fingerprint recognition and biometrics: for automatic access authentication as well as forensic applications.
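Underneath many of these applications sits a very simple primitive: measuring how pixel intensities change across an image. This sketch (the tiny image is illustrative) detects a vertical edge by differencing each pixel with its right-hand neighbor:

```python
# A minimal sketch of how a vision algorithm finds structure in pixels:
# detect vertical edges in a tiny grayscale image by differencing neighbors.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_gradient(img):
    """Absolute difference between each pixel and its right neighbor;
    large values mark vertical edges."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)] for row in img]

edges = horizontal_gradient(image)
# The boundary between the dark and bright regions shows up as a column
# of large gradient values.
```

Real systems replace this hand-built filter with banks of learned convolutional filters, but the principle of responding to local intensity changes is the same.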