Here is a list of questions and answers that can help you prepare for your Machine Learning interview. Remember to check this page regularly, as it is updated continuously with more questions and answers.
Q. What are parametric models? Give an example.
Answer:
Parametric models are those with a finite number of parameters. To predict new data, you only need to know the parameters of the model. Examples include linear regression, logistic regression, and linear SVMs.
Non-parametric models are those with an unbounded number of parameters, allowing for more flexibility. To predict new data, you need to know both the parameters of the model and the data that has been observed so far. Examples include decision trees, k-nearest neighbors, and topic models using Latent Dirichlet Allocation.
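For example, here is a minimal sketch contrasting the two with scikit-learn (the synthetic data and model choices are just illustrative): a fitted linear regression is fully described by its coefficients, while k-nearest neighbors must keep the training data around to predict.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.5 * X.ravel() + rng.normal(scale=1.0, size=100)

# Parametric: everything needed for prediction is in coef_ and intercept_.
lin = LinearRegression().fit(X, y)
print(lin.coef_, lin.intercept_)  # a fixed, finite set of parameters

# Non-parametric: prediction requires keeping the training data around.
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
print(knn.predict([[4.2]]))  # looks up neighbors among stored samples
```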
Q. What is the “Curse of Dimensionality?”
Answer:
Searching through a solution space becomes much harder as you add more features (dimensions). Consider the analogy of looking for a penny along a line vs. in a field vs. in a building. The more dimensions you have, the more data you’ll need to cover the space at the same density.
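A quick way to see this is that distances concentrate in high dimensions. The following sketch (with arbitrary synthetic data) shows the ratio of the nearest to the farthest distance approaching 1 as the dimension grows, which makes neighborhood-based search less meaningful:

```python
import numpy as np

rng = np.random.RandomState(0)
for d in [2, 10, 100, 1000]:
    X = rng.uniform(size=(1000, d))
    dists = np.linalg.norm(X - X[0], axis=1)[1:]  # distances from one point
    print(d, dists.min() / dists.max())  # ratio approaches 1 as d grows
```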
Q. Explain the Bias-Variance Tradeoff.
Answer:
Predictive models have a tradeoff between bias (error from overly simple assumptions, i.e. how far the model is from the truth) and variance (error from sensitivity to changes in the training data). Simpler models are stable (low variance) but they don’t get close to the truth (high bias). More complex models are prone to overfitting (high variance) but they are expressive enough to get close to the truth (low bias). The best model for a given problem usually lies somewhere in the middle.
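As an illustration, here is a hedged sketch fitting polynomials of increasing degree to noisy data (the data and degree choices are arbitrary): a low degree underfits (high bias), a high degree overfits (high variance).

```python
import numpy as np

rng = np.random.RandomState(0)
x_train = np.sort(rng.uniform(0, 1, 30))
x_test = np.sort(rng.uniform(0, 1, 30))
f = lambda x: np.sin(2 * np.pi * x)  # the underlying "truth"
y_train = f(x_train) + rng.normal(scale=0.2, size=30)
y_test = f(x_test) + rng.normal(scale=0.2, size=30)

for degree in [1, 4, 15]:
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 3), round(test_err, 3))
# Typically: degree 1 has high train AND test error (bias); degree 15 has
# near-zero train error but a larger test error (variance).
```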
Q. What is the difference between stochastic gradient descent (SGD) and gradient descent (GD)?
Answer:
Both algorithms are methods for finding a set of parameters that minimize a loss function by evaluating parameters against data and then making adjustments. In standard gradient descent, you evaluate all training samples for each parameter update. This is akin to taking big, slow steps toward the solution. In stochastic gradient descent, you evaluate only one training sample before each parameter update. This is akin to taking small, quick steps toward the solution.
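A minimal sketch of both update rules for least-squares linear regression (the learning rate, epochs, and data below are illustrative, not tuned):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

lr, epochs = 0.05, 50

# Batch gradient descent: one update per pass over ALL samples.
w = np.zeros(3)
for _ in range(epochs):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# Stochastic gradient descent: one update per individual sample.
w_sgd = np.zeros(3)
for _ in range(epochs):
    for i in rng.permutation(len(y)):
        grad_i = 2 * X[i] * (X[i] @ w_sgd - y[i])
        w_sgd -= lr * grad_i

print(w, w_sgd)  # both should approach true_w
```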
Q. When would you use GD over SGD, and vice-versa?
Answer:
GD theoretically minimizes the error function better than SGD. However, SGD converges much faster once the dataset becomes large. That means GD is preferable for small datasets while SGD is preferable for larger ones. In practice, however, SGD is used for most applications because it minimizes the error function well enough while being much faster and more memory efficient for large datasets.
Q. What is the Box-Cox transformation used for?
Answer:
The Box-Cox transformation is a generalized “power transformation” that transforms data to make the distribution more normal. For example, when its lambda parameter is 0, it’s equivalent to the log-transformation. It’s used to stabilize the variance (eliminate heteroskedasticity) and normalize the distribution.
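For instance, SciPy provides an implementation that also fits the lambda parameter by maximum likelihood (the skewed input below is synthetic; note that Box-Cox requires positive data):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # right-skewed data

transformed, fitted_lambda = stats.boxcox(skewed)
print(fitted_lambda)  # should be near 0, i.e. close to a log-transform
print(stats.skew(skewed), stats.skew(transformed))  # skew shrinks toward 0
```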
Q. What are 3 data preprocessing techniques to handle outliers?
Answer:
- Winsorize (cap values at a threshold; see the sketch after this list).
- Transform to reduce skew (using Box-Cox or similar).
- Remove outliers if you’re certain they are anomalies or measurement errors.
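Here is the winsorization sketch referenced above, using SciPy’s implementation (the 5% limits are an illustrative choice: the bottom and top 5% of values get capped at the 5th/95th percentiles):

```python
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.RandomState(0)
data = np.append(rng.normal(size=100), [15.0, -12.0])  # two extreme outliers

capped = winsorize(data, limits=[0.05, 0.05])
print(data.max(), data.min())      # extreme values present
print(capped.max(), capped.min())  # extremes capped at the percentiles
```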
Q. What are 3 ways of reducing dimensionality?
Answer:
- Removing collinear features (see the sketch after this list).
- Performing PCA, ICA, or other forms of algorithmic dimensionality reduction.
- Combining features with feature engineering.
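And here is the collinearity sketch referenced above, dropping one feature from each highly correlated pair (the 0.9 correlation threshold and toy data are illustrative choices):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({"a": rng.normal(size=100)})
df["b"] = df["a"] * 0.95 + rng.normal(scale=0.1, size=100)  # collinear with a
df["c"] = rng.normal(size=100)

# Keep only the upper triangle so each pair is considered once.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
print(to_drop)  # ['b']
reduced = df.drop(columns=to_drop)
```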
Q. How much data should you allocate for your training, validation, and test sets?
Answer:
You have to find a balance, and there’s no right answer for every problem. If your test set is too small, you’ll have an unreliable estimation of model performance (performance statistic will have high variance). If your training set is too small, your actual model parameters will have high variance. A good rule of thumb is to use an 80/20 train/test split. Then, your train set can be further split into train/validation or into partitions for cross-validation.
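A minimal sketch of that rule of thumb with scikit-learn (the exact ratios are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(100).reshape(50, 2), np.arange(50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)               # 80/20 train/test
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0)  # 60/20/20 overall
print(len(X_train), len(X_val), len(X_test))  # 30, 10, 10
```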
Q. If you split your data into train/test splits, is it still possible to overfit your model?
Answer:
Yes, it’s definitely possible. One common beginner mistake is re-tuning a model or training new models with different parameters after seeing its performance on the test set. In this case, it’s the model selection process that causes the overfitting. The test set should not be tainted until you’re ready to make your final selection.
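One workflow that avoids this mistake, sketched below with scikit-learn (the model and hyperparameter grid are illustrative choices): tune with cross-validation on the training set only, then score the test set exactly once.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Hyperparameter tuning happens entirely inside the training set.
search = GridSearchCV(LogisticRegression(max_iter=5000),
                      {"C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)         # the test set is never touched here

print(search.score(X_test, y_test))  # one final, untainted evaluation
```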
Q. What are the advantages and disadvantages of decision trees?
Answer:
Advantages: Decision trees are easy to interpret, nonparametric (which makes them robust to outliers), and have relatively few parameters to tune. Disadvantages: Decision trees are prone to overfitting. However, this can be addressed by ensemble methods like random forests or boosted trees.
Q. What are the advantages and disadvantages of neural networks?
Answer:
Advantages: Neural networks (specifically deep NNs) have led to performance breakthroughs for unstructured datasets such as images, audio, and video. Their incredible flexibility allows them to learn patterns that other ML algorithms struggle to capture.
Disadvantages: However, they require a large amount of training data to converge. It’s also difficult to pick the right architecture, and the internal “hidden” layers are difficult to interpret.
Q. How can you choose a classifier based on training set size?
Answer:
If the training set is small, high bias / low variance models (e.g. Naive Bayes) tend to perform better because they are less likely to overfit. If the training set is large, low bias / high variance models (e.g. Logistic Regression) tend to perform better because they can reflect more complex relationships.
Q. Explain Latent Dirichlet Allocation (LDA).
Answer:
Latent Dirichlet Allocation (LDA) is a common method of topic modeling, or classifying documents by subject matter.
LDA is a generative model that represents documents as a mixture of topics that each have their own probability distribution of possible words.
The “Dirichlet” distribution is simply a distribution of distributions. In LDA, documents are distributions of topics that are distributions of words.
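As a toy illustration, scikit-learn ships an implementation (the four-document corpus and two-topic choice below are far too small for real use):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stocks fell as markets closed", "investors sold shares today"]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

print(lda.transform(counts))  # each row: the document's mixture of topics
```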
Q. Explain Principal Component Analysis (PCA).
Answer:
PCA is a method for transforming features in a dataset by combining them into uncorrelated linear combinations. These new features, or principal components, sequentially maximize the variance represented (i.e. the first principal component has the most variance, the second principal component has the second most, and so on). As a result, PCA is useful for dimensionality reduction because you can set an arbitrary variance cutoff.
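For example, scikit-learn’s PCA accepts a fractional n_components and keeps just enough components to reach that variance cutoff (the 0.95 cutoff and digits dataset are illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)  # 64 features
pca = PCA(n_components=0.95).fit(X)  # keep 95% of the variance

print(pca.n_components_)                  # far fewer than 64
print(pca.explained_variance_ratio_[:3])  # decreasing, as described above
```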
Q. What is the ROC Curve and what is AUC (a.k.a. AUROC)?
Answer:
The ROC (receiver operating characteristic) curve is a performance plot for binary classifiers showing the True Positive Rate (y-axis) vs. the False Positive Rate (x-axis). AUC is the area under the ROC curve, and it’s a common performance metric for evaluating binary classification models. It’s equivalent to the probability that a uniformly drawn random positive is ranked before a uniformly drawn random negative.
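A minimal sketch with scikit-learn (the labels and scores are made-up classifier outputs):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(roc_auc_score(y_true, y_score))  # area under the (fpr, tpr) curve
```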
Q. Why is Area Under ROC Curve (AUROC) better than raw accuracy as an out-of-sample evaluation metric?
Answer:
AUROC is robust to class imbalance, unlike raw accuracy. For example, if you want to detect a type of cancer that’s prevalent in only 1% of the population, you can build a model that achieves 99% accuracy by simply classifying everyone as cancer-free.
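A quick sketch of that example with synthetic labels: a “model” that scores everyone identically gets ~99% accuracy but an uninformative AUROC of 0.5.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.RandomState(0)
y_true = (rng.uniform(size=10000) < 0.01).astype(int)  # ~1% positives

y_pred = np.zeros(10000)  # classify everyone as cancer-free
print(accuracy_score(y_true, y_pred))          # ~0.99
print(roc_auc_score(y_true, np.zeros(10000)))  # 0.5: no discrimination
```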
Q. Why are ensemble methods superior to individual models?
Answer:
They average out biases, reduce variance, and are less likely to overfit.
There’s a common saying in machine learning: “ensemble and get 2%.”
This implies that you can build your models as usual and typically expect a small performance boost from ensembling.
Q. Explain bagging.
Answer:
Bagging, or Bootstrap Aggregating, is an ensemble method in which multiple subsets of the dataset are first created by resampling with replacement (bootstrapping). Then, each subset is used to train a model, and the final predictions are made by voting or averaging across the component models. Because the component models are independent, bagging can be performed in parallel.
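A minimal sketch with scikit-learn’s BaggingClassifier, whose default base estimator is a decision tree (the dataset and settings below are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 50 trees, each fit on a bootstrap sample, trained in parallel (n_jobs=-1);
# predictions are combined by majority vote.
bag = BaggingClassifier(n_estimators=50, n_jobs=-1,
                        random_state=0).fit(X_train, y_train)
print(bag.score(X_test, y_test))
```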
Q. What are some key business metrics for a (SaaS startup | Retail bank | e-Commerce site)?
Answer:
Thinking about key business metrics, often shortened to KPIs (Key Performance Indicators), is an essential part of a data scientist’s job. Here are a few examples, but you should practice brainstorming your own.
Tip: When in doubt, start with the easier question of “how does this business make money?”
- SaaS startup: Customer lifetime value, new accounts, account lifetime, churn rate, usage rate, social share rate
- Retail bank: Offline leads, online leads, new accounts (segmented by account type), risk factors, product affinities
- e-Commerce: Product sales, average cart value, cart abandonment rate, email leads, conversion rate
Q. How can you help our marketing team be more efficient?
Answer:
The answer will depend on the type of company. Here are some examples.
- Clustering algorithms to build custom customer segments for each type of marketing campaign.
- Natural language processing for headlines to predict performance before running ad spend.
- Predicting conversion probability based on a user’s website behavior in order to create better re-targeting campaigns.