Coursera: Neural Networks and Deep Learning (Week 4) Quiz [MCQ Answers] - deeplearning.ai

▸ Key concepts on Deep Neural Networks:



  1. What is the "cache" used for in our implementation of forward propagation and backward propagation?

    • It is used to cache the intermediate values of the cost function during training.

    • We use it to pass variables computed during forward propagation to the corresponding backward propagation step. It contains useful values for backward propagation to compute derivatives.
      Correct
      Correct, the "cache" records values from the forward propagation units and sends them to the corresponding backward propagation units, where they are needed to compute the chain-rule derivatives (see the sketch after this list).

    • We use it to pass variables computed during backward propagation to the corresponding forward propagation step. It contains useful values for forward propagation to compute activations.

    • It is used to keep track of the hyperparameters that we are searching over, to speed up computation.
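    A minimal sketch of how such a cache might be threaded from forward to backward propagation (the helper names here are illustrative, not the course's exact API):

      import numpy as np

      def linear_forward(A_prev, W, b):
          # Forward step: compute Z and stash the inputs in a cache
          Z = W @ A_prev + b
          cache = (A_prev, W, b)  # values backward propagation will need
          return Z, cache

      def linear_backward(dZ, cache):
          # Backward step: the cache supplies the forward-pass values
          # needed to apply the chain rule
          A_prev, W, b = cache
          m = A_prev.shape[1]
          dW = (dZ @ A_prev.T) / m
          db = np.sum(dZ, axis=1, keepdims=True) / m
          dA_prev = W.T @ dZ
          return dA_prev, dW, db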




  2. Among the following, which ones are "hyperparameters"? (Check all that apply.)

    • learning rate
      Correct

    • number of layers L in the neural network
      Correct

    • weight matrices

    • bias vectors

    • number of iterations
      Correct

    • activation values

    • size of the hidden layers
      Correct
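    To make the distinction concrete, here is a sketch (values and names are illustrative, not from the course code) separating hyperparameters, which you choose, from parameters, which gradient descent learns:

      import numpy as np

      # Hyperparameters: set by the practitioner and searched over by hand
      learning_rate = 0.0075
      num_iterations = 2500
      layer_dims = [12288, 20, 7, 5, 1]  # fixes L and the hidden layer sizes

      # Parameters: learned by gradient descent, not set directly
      W1 = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
      b1 = np.zeros((layer_dims[1], 1))
      # (activation values A1, A2, ... are computed quantities, neither)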




  3. Which of the following statements is true?

    • The deeper layers of a neural network are typically computing more complex features of the input than the earlier layers.
      Correct

    • The earlier layers of a neural network are typically computing more complex features of the input than the deeper layers.




  4. Vectorization allows you to compute forward propagation in an L-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l = 1, 2, ..., L. True/False?

    • True

    • False
      Correct
      Forward propagation propagates the input through the layers sequentially. For a shallow network we could write out each layer's computation by hand, but for a deeper network we cannot avoid an explicit for-loop iterating over the layers l = 1, ..., L, as the sketch below shows.
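      A sketch of what this looks like in code (with a hypothetical relu helper): the computation within each layer is vectorized across units and training examples, but the layers themselves are still visited one by one:

      import numpy as np

      def relu(Z):
          return np.maximum(0, Z)

      def L_layer_forward(X, parameters):
          L = len(parameters) // 2      # each layer contributes a W and a b
          A = X
          for l in range(1, L + 1):     # the unavoidable loop over layers
              Z = parameters['W' + str(l)] @ A + parameters['b' + str(l)]
              A = relu(Z)               # vectorized over units and examples
          return A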




  5. Assume we store the values for n^[l] in an array called layers, as follows: layer_dims = [n_x, 4, 3, 2, 1]. So layer 1 has four hidden units, layer 2 has 3 hidden units, and so on. Which of the following for-loops will allow you to initialize the parameters for the model?

    • for(i in range(1, len(layer_dims)/2)):
      parameter['W' + str(i)] = np.random.randn(layers[i], layers[i-1]) * 0.01
      parameter['b' + str(i)] = np.random.randn(layers[i], 1) * 0.01


    • for(i in range(1, len(layer_dims)/2)):
      parameter['W' + str(i)] = np.random.randn(layers[i], layers[i-1]) * 0.01
      parameter['b' + str(i)] = np.random.randn(layers[i-1], 1) * 0.01


    • for(i in range(1, len(layer_dims))):
      parameter['W' + str(i)] = np.random.randn(layers[i-1], layers[i]) * 0.01
      parameter['b' + str(i)] = np.random.randn(layers[i], 1) * 0.01


    • for(i in range(1, len(layer_dims))):
      parameter['W' + str(i)] = np.random.randn(layers[i], layers[i-1]) * 0.01
      parameter['b' + str(i)] = np.random.randn(layers[i], 1) * 0.01

      Correct
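    The options above are written in the quiz's pseudo-Python. A cleaned-up, runnable version of the correct option might look like this (the course assignment initializes b with zeros, though small random values as shown in the option also work):

      import numpy as np

      def initialize_parameters_deep(layer_dims):
          parameters = {}
          for l in range(1, len(layer_dims)):
              # W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1)
              parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
              parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
          return parameters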




  6. Consider the following neural network.
    [Figure: multi-layer neural network]

    How many layers does this network have?

    • The number of layers L is 4. The number of hidden layers is 3.
      Correct
      Yes. As seen in lecture, the number of layers is counted as the number of hidden layers + 1. The input and output layers are not counted as hidden layers.

    • The number of layers L is 3. The number of hidden layers is 3.

    • The number of layers L is 4. The number of hidden layers is 4.

    • The number of layers L is 5. The number of hidden layers is 4.




  7. During forward propagation, in the forward function for a layer l you need to know what the activation function in a layer is (Sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what the activation function for layer l is, since the gradient depends on it. True/False?

    • True
      Correct
      Yes. As you saw in Week 3, each activation has a different derivative. Thus, during backpropagation you need to know which activation was used in the forward propagation to be able to compute the correct derivative (see the sketch after this list).

    • False
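    A minimal sketch (hypothetical function name) of why the backward pass must know the forward activation: each activation g contributes its own derivative to dZ = dA * g'(Z):

      import numpy as np

      def activation_backward(dA, Z, activation):
          # dZ = dA * g'(Z); which g' applies depends on the forward choice
          if activation == "relu":
              return dA * (Z > 0)                # relu'(z) = 1 if z > 0 else 0
          elif activation == "sigmoid":
              s = 1 / (1 + np.exp(-Z))
              return dA * s * (1 - s)            # sigmoid'(z) = s(1 - s)
          elif activation == "tanh":
              return dA * (1 - np.tanh(Z) ** 2)  # tanh'(z) = 1 - tanh^2(z)
          raise ValueError("unknown activation")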




  8. There are certain functions with the following properties:
    (i) To compute the function using a shallow network circuit, you will need a large network (where we measure size by the number of logic gates in the network), but
    (ii) To compute it using a deep network circuit, you need only an exponentially smaller network. True/False?

    • True
      Correct

    • False
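    The example from lecture is the parity (XOR) of n input bits: a tree of pairwise XOR gates uses O(n) gates at O(log n) depth, while a shallow two-layer circuit essentially has to enumerate exponentially many input patterns. A toy sketch of the deep version:

      def parity_deep(bits):
          # Tree of pairwise XORs: O(n) gates, O(log n) depth
          while len(bits) > 1:
              paired = [a ^ b for a, b in zip(bits[::2], bits[1::2])]
              if len(bits) % 2:           # carry an odd leftover bit forward
                  paired.append(bits[-1])
              bits = paired
          return bits[0]

      print(parity_deep([1, 0, 1, 1]))    # 1 (three ones: odd parity)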







  9. Consider the following 2 hidden layer neural network:
    [Figure: neural network with 2 hidden layers]

    Which of the following statements are True? (Check all that apply).

    • W^[1] will have shape (4, 4)
      Correct
      Yes. More generally, the shape of W^[1] is (n^[1], n^[0]) = (4, 4).

    • b^[1] will have shape (4, 1)
      Correct
      Yes. More generally, the shape of b^[1] is (n^[1], 1) = (4, 1).

    • W^[1] will have shape (3, 4)

    • b^[1] will have shape (3, 1)

    • W^[2] will have shape (3, 4)
      Correct
      Yes. More generally, the shape of W^[2] is (n^[2], n^[1]) = (3, 4).

    • b^[2] will have shape (1, 1)

    • W^[2] will have shape (3, 1)

    • b^[2] will have shape (3, 1)
      Correct
      Yes. More generally, the shape of b^[2] is (n^[2], 1) = (3, 1).

    • W^[3] will have shape (3, 1)

    • b^[3] will have shape (1, 1)
      Correct
      Yes. More generally, the shape of b^[3] is (n^[3], 1) = (1, 1).

    • W^[3] will have shape (1, 3)
      Correct
      Yes. More generally, the shape of W^[3] is (n^[3], n^[2]) = (1, 3).

    • b^[3] will have shape (3, 1)
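    These shapes all follow from layer_dims = [4, 4, 3, 1] for this network; a quick self-contained check:

      import numpy as np

      layer_dims = [4, 4, 3, 1]  # n[0]=4 inputs, n[1]=4, n[2]=3, n[3]=1
      for l in range(1, len(layer_dims)):
          W = np.random.randn(layer_dims[l], layer_dims[l-1])
          b = np.zeros((layer_dims[l], 1))
          print(f"W{l}: {W.shape}  b{l}: {b.shape}")
      # W1: (4, 4)  b1: (4, 1)
      # W2: (3, 4)  b2: (3, 1)
      # W3: (1, 3)  b3: (1, 1)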




  10. Whereas the previous question used a specific network, in the general case what is the dimension of W^[l], the weight matrix associated with layer l?

    • W^[l] has shape (n^[l+1], n^[l])

    • W^[l] has shape (n^[l], n^[l+1])

    • W^[l] has shape (n^[l], n^[l-1])
      Correct

    • W^[l] has shape (n^[l-1], n^[l])
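    The rule falls out of the dimensions in z^[l] = W^[l] a^[l-1] + b^[l]: to map an (n^[l-1], m) activation matrix to an (n^[l], m) one, W^[l] must be (n^[l], n^[l-1]). A quick check with arbitrary sizes:

      import numpy as np

      n_prev, n_l, m = 5, 3, 10           # n[l-1], n[l], number of examples
      A_prev = np.random.randn(n_prev, m)
      W = np.random.randn(n_l, n_prev)    # (n[l], n[l-1])
      b = np.zeros((n_l, 1))              # broadcast across the m examples
      Z = W @ A_prev + b
      assert Z.shape == (n_l, m)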




    Thanks & Regards,

    - APDaga DumpBox