
Machine Learning week 2 quiz: Linear Regression with Multiple Variables


Linear Regression with Multiple Variables

5 questions

1.

Suppose $m = 4$ students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

midterm exam | (midterm exam)² | final exam
89 | 7921 | 96
72 | 5184 | 74
94 | 8836 | 87
69 | 4761 | 78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$, where $x_1$ is the midterm score and $x_2$ is (midterm score)$^2$. Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.

What is the normalized feature $x_1^{(1)}$? (Hint: midterm = 89, final = 96 is training example 1.) Please round your answer to two decimal places.
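To make the computation concrete, here is a minimal Python sketch of the mean normalization and range scaling the question describes, applied to the midterm column of the table above (numpy only; the variable names are illustrative):

```python
import numpy as np

# Midterm scores of the m = 4 training examples (the x1 column above).
midterm = np.array([89.0, 72.0, 94.0, 69.0])

# Mean normalization and feature scaling by the range (max - min):
#   x_scaled = (x - mean) / (max - min)
mean = midterm.mean()                          # (89 + 72 + 94 + 69) / 4 = 81.0
value_range = midterm.max() - midterm.min()    # 94 - 69 = 25.0

x1_scaled = (midterm - mean) / value_range
print(round(x1_scaled[0], 2))                  # normalized x1 of training example 1
```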

2.

You run gradient descent for 15 iterations with $\alpha = 0.3$ and compute $J(\theta)$ after each iteration. You find that the value of $J(\theta)$ increases over time. Based on this, which of the following conclusions seems most plausible? (A toy simulation after the options illustrates this behavior.)

Rather than use the current value of $\alpha$, it'd be more promising to try a smaller value of $\alpha$ (say $\alpha = 0.1$).

Rather than use the current value of $\alpha$, it'd be more promising to try a larger value of $\alpha$ (say $\alpha = 1.0$).

$\alpha = 0.3$ is an effective choice of learning rate.
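As a toy illustration (the data below is invented for the demo, not taken from the quiz), this sketch runs batch gradient descent on a small least-squares problem where $\alpha = 0.3$ happens to overshoot, so $J(\theta)$ grows each iteration, while $\alpha = 0.1$ converges:

```python
import numpy as np

# Illustrative data: an intercept column plus one feature; y = 2 * x exactly.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
m = len(y)

def cost(theta):
    r = X @ theta - y
    return r @ r / (2 * m)              # J(theta), the usual squared-error cost

def descend(alpha, iters=15):
    theta = np.zeros(2)
    history = []
    for _ in range(iters):
        theta = theta - alpha * (X.T @ (X @ theta - y)) / m   # batch gradient step
        history.append(cost(theta))
    return history

print(descend(0.3)[-1] > descend(0.3)[0])   # True: with alpha = 0.3, J(theta) increases
print(descend(0.1)[-1] < descend(0.1)[0])   # True: with alpha = 0.1, J(theta) decreases
```

For this particular $X$, the largest eigenvalue of $X^T X / m$ is about 8.35, so any $\alpha$ above $2 / 8.35 \approx 0.24$ diverges; that is why 0.3 overshoots here while 0.1 does not.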

3.

Suppose you have $m = 23$ training examples with $n = 5$ features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is $\theta = (X^T X)^{-1} X^T y$. For the given values of $m$ and $n$, what are the dimensions of $\theta$, $X$, and $y$ in this equation? (A dimension-checking sketch follows the options.)

$X$ is $23 \times 6$, $y$ is $23 \times 6$, $\theta$ is $6 \times 6$

$X$ is $23 \times 5$, $y$ is $23 \times 1$, $\theta$ is $5 \times 5$

$X$ is $23 \times 6$, $y$ is $23 \times 1$, $\theta$ is $6 \times 1$

$X$ is $23 \times 5$, $y$ is $23 \times 1$, $\theta$ is $5 \times 1$
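A quick dimension check of the normal equation, using random placeholder data for $X$ and $y$ (a sketch, not part of the quiz):

```python
import numpy as np

m, n = 23, 5
rng = np.random.default_rng(0)

# Design matrix: the all-ones intercept column plus n feature columns.
X = np.hstack([np.ones((m, 1)), rng.random((m, n))])   # shape (23, 6)
y = rng.random((m, 1))                                 # shape (23, 1)

# theta = (X^T X)^-1 X^T y, exactly as in the normal equation above.
theta = np.linalg.inv(X.T @ X) @ X.T @ y               # shape (6, 1)
print(X.shape, y.shape, theta.shape)                   # (23, 6) (23, 1) (6, 1)
```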

4.

Suppose you have a dataset with $m = 1000000$ examples and $n = 200000$ features for each example. You want to use multivariate linear regression to fit the parameters $\theta$ to the data. Should you prefer gradient descent or the normal equation? (A rough cost comparison follows the options.)

The normal equation, since gradient descent might be unable to find the optimal $\theta$.

The normal equation, since it provides an efficient way to directly find the solution.

Gradient descent, since it will always converge to the optimal $\theta$.

Gradient descent, since $(X^T X)^{-1}$ will be very slow to compute in the normal equation.
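To see why the size of $n$ matters here, a rough back-of-the-envelope operation count (illustrative constants only; real implementations differ):

```python
m, n = 1_000_000, 200_000

# Normal equation: form X^T X (about m * n^2 multiplications), then invert
# the resulting n x n matrix (about n^3 operations).
normal_eq_ops = m * n**2 + n**3

# One batch gradient descent step: X @ theta and X.T @ residual, about m * n each.
gd_step_ops = 2 * m * n

print(f"normal equation: ~{normal_eq_ops:.1e} operations")
print(f"one GD step:     ~{gd_step_ops:.1e} operations")
```

On this count, a single normal-equation solve costs as much as roughly $10^5$ gradient descent steps, which is why gradient descent is preferred when $n$ is this large.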

5.

Which of the following are reasons for using feature scaling? (A minimal scaling sketch follows the options.)

It prevents the matrix $X^T X$ (used in the normal equation) from being non-invertible (singular/degenerate).

It speeds up gradient descent by making it require fewer iterations to get to a good solution.

It is necessary to prevent the normal equation from getting stuck in local optima.

It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.
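For reference, a minimal sketch of the mean-normalization-plus-range scaling discussed in this quiz, applied column-wise to an illustrative feature matrix (the housing-style numbers are made up for the example):

```python
import numpy as np

def scale_features(X):
    """Mean normalization plus division by the range (max - min), per column."""
    mean = X.mean(axis=0)
    value_range = X.max(axis=0) - X.min(axis=0)
    return (X - mean) / value_range

# Illustrative features on very different scales: size in sq ft, bedroom count.
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 4.0],
              [1416.0, 2.0]])

print(scale_features(X))   # every column now lies within [-1, 1]
```

Because both columns end up on comparable ranges, the cost surface is less elongated and a single learning rate works well for every parameter; this is why scaling reduces the number of iterations gradient descent needs, rather than making each iteration cheaper.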
