Landing a job at tech giant Google is a dream come true for many engineers. One gets the opportunity to work with talented professionals and to learn and share a great deal of knowledge. However, cracking a Google interview is difficult, and candidates need in-depth knowledge as well as hands-on project experience. The interview comprises brain-teaser-style problem-solving questions, technical queries and coding exercises, among others.
In this article, we list the top 10 machine learning questions that have been asked at Google Data Science interviews. Bear in mind that these questions have been collected from various sources: comments, reviews and discussion forums about interviews at Google.
1| What will you do if removing missing values from a dataset causes bias?
While working on a machine learning project, most researchers encounter missing data in their dataset. Missing data can create several issues in the project: it reduces statistical power and can introduce bias. However, there are several ways to fix this. For example, one can impute missing values with the mean, median or mode in order to mitigate bias. In one of our articles, we discussed various ways to handle missing data/values in machine learning datasets.
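As a minimal sketch of mean and median imputation with pandas (the column names and values are hypothetical):

```python
import numpy as np
import pandas as pd

# Toy dataset with missing values (hypothetical columns)
df = pd.DataFrame({
    "age": [25, np.nan, 47, 31, np.nan],
    "salary": [50_000, 62_000, np.nan, 58_000, 71_000],
})

# Mean imputation: fill each column's NaNs with that column's mean
df_mean = df.fillna(df.mean())

# Median imputation: more robust when the column has outliers
df_median = df.fillna(df.median())

print(df_mean.isna().sum().sum())  # 0 missing values remain
```

Which statistic to impute with depends on the column: the median is usually safer for skewed numeric data, while the mode suits categorical attributes.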
2| How will you design a recommendation engine for jobs?
A recommendation system is an engine that acts as a filter, learning from a user's interests and behavioural history. In one of our articles, we discussed how LinkedIn's recommendation system works and how it generates suitable, matching jobs for a user.
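One simple content-based approach (not LinkedIn's actual system; the job texts and user profile below are invented for illustration) is to vectorise job descriptions and rank them by similarity to a profile built from the user's history:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical job postings and a user profile from browsing history
jobs = [
    "machine learning engineer python tensorflow",
    "frontend developer react javascript",
    "data scientist statistics python",
]
user_profile = "python machine learning statistics"

vec = TfidfVectorizer()
job_matrix = vec.fit_transform(jobs)          # one TF-IDF row per job
user_vec = vec.transform([user_profile])      # same vocabulary for the user

# Rank jobs by cosine similarity to the user's profile
scores = cosine_similarity(user_vec, job_matrix).ravel()
ranking = scores.argsort()[::-1]
print([jobs[i] for i in ranking])
```

A production engine would combine such content signals with collaborative filtering over user-job interaction data.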
3| What is Rectified Linear Unit (ReLU) in Machine learning?
Rectified Linear Unit, or ReLU, is a widely used activation function that passes positive values through unchanged and maps negative values to zero, which speeds up training. In one of our articles, we discussed ReLU and why it is preferred over other non-linear activation functions.
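The function itself is a one-liner; a sketch with NumPy:

```python
import numpy as np

def relu(x):
    # Positive values pass through; negatives are clamped to zero
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # the three non-positive inputs all map to 0
```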
4| What is the difference between a bagged model and a boosted model?
Bagging and Boosting are popular ensemble methods. Bagging decreases the variance of predictions by generating additional training sets from the original data through sampling with replacement, training a model on each resample and aggregating their outputs. Boosting, by contrast, is an iterative technique that adjusts the weight of each observation based on the previous classification. In one of our articles, we discussed the steps of these two methods along with their pros and cons.
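A quick side-by-side sketch using scikit-learn's implementations on a synthetic dataset (the data and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: models fit independently on bootstrap resamples, then averaged
bag = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Boosting: models fit sequentially, up-weighting misclassified points
boost = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

print("bagging:", bag.score(X_te, y_te), "boosting:", boost.score(X_te, y_te))
```

The practical rule of thumb: bagging helps when the base model overfits (high variance), boosting when it underfits (high bias).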
5| What is AdaGrad algorithm in machine learning?
AdaGrad is an adaptive stochastic gradient descent algorithm used for gradient-based optimisation. It offers several benefits: it eliminates the need to manually tune the learning rate, and convergence is faster and more reliable than with plain Stochastic Gradient Descent when the scaling of the weights is unequal.
Learn more here.
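A minimal from-scratch sketch of the update rule (the quadratic objective and hyperparameters below are illustrative, not part of any standard API):

```python
import numpy as np

def adagrad(grad_fn, w, lr=0.5, eps=1e-8, steps=200):
    # Accumulate squared gradients per weight; each weight then gets its
    # own effective learning rate lr / (sqrt(accumulated) + eps)
    g_acc = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        g_acc += g ** 2
        w = w - lr * g / (np.sqrt(g_acc) + eps)
    return w

# Minimise f(w) = w0**2 + 10 * w1**2, a badly scaled quadratic
grad = lambda w: np.array([2 * w[0], 20 * w[1]])
w_final = adagrad(grad, np.array([3.0, 2.0]))
print(w_final)  # both coordinates approach the minimum at (0, 0)
```

The per-weight scaling is exactly what helps on unequally scaled problems like this one: the steep `w1` direction automatically gets smaller steps than the shallow `w0` direction.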
6| What is the degree of freedom for lasso?
The lasso is a popular model-building technique that simultaneously produces accurate and parsimonious models. In linear regression, the degrees of freedom is the number of estimated predictors, and it plays an important role in model assessment and selection. Degrees of freedom is often used to quantify the complexity of a statistical modelling procedure. For the lasso, the number of nonzero coefficients is an unbiased and consistent estimate of the degrees of freedom.
Click here to read more.
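To see this estimate in practice, one can fit a lasso and count the surviving coefficients; a sketch with scikit-learn (the synthetic data and `alpha` value are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first three features carry signal; the rest are noise
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)

# Nonzero coefficients estimate the degrees of freedom of the fit
df = int(np.sum(lasso.coef_ != 0))
print("estimated degrees of freedom:", df)
```

Increasing `alpha` shrinks more coefficients to exactly zero, so the estimated degrees of freedom fall as regularisation strengthens.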
7| What are anomaly detection methods?
Anomaly detection is a technique used to identify unusual patterns that do not conform to expected behaviour; such patterns are known as outliers. Anomalies can be detected in several ways, such as simple statistical methods, density-based anomaly detection and clustering-based anomaly detection, among others. In one of our articles, we discussed how to approach anomaly detection using Big Data analytics.
Click here to know more.
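The simplest of these, the statistical z-score method, can be sketched in a few lines (the data and threshold are illustrative):

```python
import numpy as np

def zscore_anomalies(x, threshold=3.0):
    # Flag points more than `threshold` standard deviations from the mean
    z = (x - x.mean()) / x.std()
    return np.abs(z) > threshold

data = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 25.0])
flags = zscore_anomalies(data, threshold=2.0)
print(np.where(flags)[0])  # only the last point (25.0) is flagged
```

Density- and clustering-based methods generalise this idea to multivariate data where a single global mean and standard deviation are not enough.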
8| What is AUC in machine learning?
AUC, or Area Under the ROC Curve, is one of the most important evaluation metrics for assessing a classification model's performance. The ROC curve visualises the performance of a binary classifier across all possible classification thresholds, and the AUC aggregates that performance into a single number by measuring the entire two-dimensional area underneath the curve.
Click here to know more.
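Computing it takes one call with scikit-learn; a tiny worked example (the labels and scores are made up):

```python
from sklearn.metrics import roc_auc_score

# True binary labels and a model's predicted scores
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# AUC equals the fraction of (positive, negative) pairs the model
# ranks correctly: here 3 of 4 pairs, so 0.75
auc = roc_auc_score(y_true, y_score)
print(auc)  # 0.75
```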
9| How does caching work and how do you use it in Data Science?
Caching is a high-speed data storage layer which stores a subset of data so that future requests for that data are served faster than by searching the data's primary storage location.
Click here to know more.
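In Python, memoising an expensive lookup is a common, lightweight form of caching; a sketch where the slow query is simulated with a sleep (the function and key are hypothetical):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_lookup(key):
    # Stand-in for a slow query against primary storage
    time.sleep(0.1)
    return key.upper()

start = time.perf_counter()
expensive_lookup("user:42")   # cache miss: pays the full lookup cost
first = time.perf_counter() - start

start = time.perf_counter()
expensive_lookup("user:42")   # cache hit: served from memory
second = time.perf_counter() - start

print(second < first)  # True: the cached call is far faster
```

In data science work the same idea appears at larger scale: caching cleaned datasets, feature matrices or model predictions so repeated pipeline runs skip recomputation.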
10| Why use feature selection?
Feature Selection, also known as variable selection or attribute selection, is a method of reducing data dimensionality in predictive analysis. It reduces the number of attributes in the dataset by including or excluding attributes that are already present in the data, without transforming them. In one of our articles, we discussed the various feature selection techniques in machine learning and why they play an important role in machine learning tasks.
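A sketch of univariate feature selection with scikit-learn, keeping only the highest-scoring attributes of a synthetic dataset (the sizes and `k` are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 20 features, of which only 5 are informative
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# Keep the 5 features with the highest ANOVA F-scores
selector = SelectKBest(f_classif, k=5)
X_new = selector.fit_transform(X, y)
print(X.shape, "->", X_new.shape)  # (200, 20) -> (200, 5)
```

Note that the selected columns are original features, untransformed, which is what distinguishes feature selection from dimensionality reduction methods such as PCA.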