Choose a topic to test your knowledge and improve your Machine Learning (ML) skills
A supervised scenario is characterized by the concept of a _____.
Overlearning is caused by an excessive ______.
Which of the following are models for feature extraction?
_ provides some built-in datasets that can be used for testing purposes.
While using _____, all labels are turned into sequential numbers.
___ produce sparse matrices of real numbers that can be fed into any machine learning model.
scikit-learn offers the class ______, which is responsible for filling the holes using a strategy based on the mean, median, or frequency.
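Assuming the blank refers to scikit-learn's imputer class (SimpleImputer in recent releases, Imputer in older ones), the "mean" strategy it implements can be sketched in plain NumPy on toy data:

```python
import numpy as np

# Sketch of the "mean" filling strategy: replace each NaN with the mean
# of the non-missing entries in its column.
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])

col_means = np.nanmean(X, axis=0)              # per-column mean, ignoring NaNs
filled = np.where(np.isnan(X), col_means, X)   # NaNs replaced column-wise
```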
scikit-learn also provides a class for per-sample normalization, _____.
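Per-sample normalization (what scikit-learn's Normalizer class performs, if that is the intended blank) rescales each row rather than each column; a minimal NumPy sketch:

```python
import numpy as np

# Per-sample (row-wise) normalization: scale each sample vector so that
# its L2 norm equals 1, independently of the other samples.
X = np.array([[3.0, 4.0],
              [1.0, 0.0]])

norms = np.linalg.norm(X, axis=1, keepdims=True)
X_unit = X / norms   # every row now has unit length
```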
A _____ dataset with many features contains information proportional to the independence of all features and their variance.
In order to assess how much information is brought by each component, and the correlation among them, a useful tool is the_____.
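If the blank refers to the covariance matrix (a common answer here), the following NumPy sketch shows how its eigenvalues quantify the variance carried by each component, while off-diagonal entries expose correlation; the data is random and purely illustrative:

```python
import numpy as np

# Eigenvalues of the covariance matrix give the variance along each
# principal direction; dividing by their sum yields the fraction of
# total information each component carries.
rng = np.random.RandomState(0)
X = rng.randn(200, 3)

C = np.cov(X, rowvar=False)              # 3x3 covariance matrix
eigvals = np.linalg.eigvalsh(C)          # ascending variances per component
explained_ratio = eigvals[::-1] / eigvals.sum()  # descending, sums to 1
```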
The _____ parameter can assume different values which determine how the data matrix is initially processed.
Which of the following statements is true about outliers in linear regression?
Let's say a linear regression model perfectly fits the training data (train error is zero). Which of the following statements is true?
In a linear regression problem, we are using R-squared to measure goodness of fit. We add a feature to the linear regression model and retrain the same model. Which of the following options is true?
To test the linear relationship between y (dependent) and x (independent) continuous variables, which of the following plots is best suited?
Which of the following steps/assumptions in regression modeling impacts the trade-off between under-fitting and over-fitting the most?
Which of the following is true about the Ridge and Lasso regression methods in the case of feature selection?
Which of the following statement(s) can be true after adding a variable to a linear regression model?
1. R-squared and adjusted R-squared both increase
2. R-squared increases and adjusted R-squared decreases
3. R-squared decreases and adjusted R-squared decreases
4. R-squared decreases and adjusted R-squared increases
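The key fact behind this question is that R-squared never decreases on the training data when a regressor is added, while adjusted R-squared penalizes the extra parameter and can move either way. A NumPy sketch on synthetic data (all names and data here are illustrative):

```python
import numpy as np

def r2(y, X):
    # least-squares fit with an intercept, then coefficient of determination
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def adj_r2(y, X):
    # adjusted R-squared penalizes the number of regressors p
    n, p = len(y), X.shape[1]
    return 1 - (1 - r2(y, X)) * (n - 1) / (n - p - 1)

rng = np.random.RandomState(0)
x1 = rng.randn(50)
noise_feature = rng.randn(50)        # pure noise, unrelated to y
y = 2 * x1 + rng.randn(50)

r2_one = r2(y, x1[:, None])
r2_two = r2(y, np.column_stack([x1, noise_feature]))
# r2_two >= r2_one always holds on the training data, even for a noise feature
```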
What is/are true about the kernel in SVM?
1. A kernel function maps low-dimensional data to a high-dimensional space
2. It is a similarity function
Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. Now suppose you want to build an SVM model with a quadratic kernel function of polynomial degree 2 that uses the slack variable C as one of its hyperparameters. What would happen if you used a very small C (C ≈ 0)?
The cost parameter in the SVM means:
How do you handle missing or corrupted data in a dataset?
Which of the following statements about Naive Bayes is incorrect?
SVMs are less effective when:
If there is only a discrete number of possible outcomes, the process is called _____.
Some people use the term ___ instead of prediction, only to avoid the weird idea that machine learning is a sort of modern magic.
The term _____ can be freely used, but with the same meaning adopted in physics or system theory.
Common deep learning applications/problems can also be solved using ____.
What is the function of 'Unsupervised Learning'?
Suppose we fit Lasso regression to a data set which has 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?
We can also compute the coefficients of linear regression with the help of an analytical method called the Normal Equation. Which of the following is/are true about the Normal Equation?
1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. There is no need to iterate
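The Normal Equation named above solves linear regression in closed form, beta = (XᵀX)⁻¹Xᵀy, with no learning rate and no iterations; the cubic cost of solving the d×d system is why it slows down when the number of features d is large. A minimal NumPy sketch on synthetic data:

```python
import numpy as np

# Normal equation: solve (X^T X) beta = X^T y directly, no gradient descent.
rng = np.random.RandomState(0)
X = np.column_stack([np.ones(100), rng.randn(100, 2)])  # intercept + 2 features
true_beta = np.array([1.0, 2.0, -3.0])
y = X @ true_beta + 0.01 * rng.randn(100)               # small noise

# Solving the linear system is preferred over explicitly inverting X^T X.
beta = np.linalg.solve(X.T @ X, X.T @ y)
```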
Which of the following options is true regarding regression and correlation? (Note: y is the dependent variable and x is the independent variable.)
Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. Now suppose you want to build an SVM model with a quadratic kernel function of polynomial degree 2 that uses the slack variable C as one of its hyperparameters. What would happen if you used a very large value of C (C → infinity)?
Hyperplanes are _____________ boundaries that help classify the data points.
The _____ of the hyperplane depends upon the number of features.
What is the purpose of performing cross-validation?
Which of the following is true about Naive Bayes ?
Which of the following is not supervised learning?
___ can be adopted when it is necessary to categorize a large amount of data with only a few complete examples, or when there is a need to impose some constraints on a clustering algorithm.
In reinforcement learning, this feedback is usually called ___.
In the last decade, many researchers started training bigger and bigger models built with several different layers; this is why the approach is called _____.
If a linear regression model fits perfectly, i.e., train error is zero, then ____________.
In the syntax of the linear model lm(formula, data, ...), data refers to ______.
Let's say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) to the categorical feature(s). What challenges may you face if you apply OHE to a categorical variable of the train dataset?
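The classic pitfall behind this question is a category that appears in the test data but not in the training data: an encoder fitted on the train set has no column for it. A pure-Python sketch with hypothetical toy data:

```python
# One-hot encoder "fitted" only on the training categories: any value
# unseen at fit time encodes as an all-zeros row, so information is lost.
train = ["red", "green", "red"]
test = ["green", "blue"]            # "blue" never seen during training

categories = sorted(set(train))     # columns come from the train set only
def one_hot(value):
    return [1 if value == c else 0 for c in categories]

encoded_test = [one_hot(v) for v in test]
# "blue" -> [0, 0]: the unseen category is indistinguishable from missing
```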
____ can accept a NumPy RandomState generator or an integer seed.
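This "integer seed or RandomState generator" convention is used throughout scikit-learn (commonly via a random_state parameter); a small sketch of the idiom with a hypothetical helper function:

```python
import numpy as np

def shuffled(values, random_state):
    # Accept either an existing RandomState generator or an integer seed,
    # mirroring the convention scikit-learn uses for random_state.
    if isinstance(random_state, np.random.RandomState):
        rng = random_state
    else:
        rng = np.random.RandomState(random_state)
    out = list(values)
    rng.shuffle(out)   # in-place shuffle driven by the chosen generator
    return out

a = shuffled(range(10), random_state=42)
b = shuffled(range(10), random_state=42)   # same seed, identical order
```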
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least _____ valid options.
_____ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.
Suppose you have fitted a complex regression model on a dataset. Now you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda.
The function used for linear regression in R is __________.
Suppose you are in a situation where you find that your linear regression model is under-fitting the data. In such a situation, which of the following options would you consider?
1. I will add more variables
2. I will start introducing polynomial-degree variables
3. I will remove some variables
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate the others
2. Sometimes feature normalization is not feasible in the case of categorical variables
3. Feature normalization always helps when we use a Gaussian kernel in SVM