# ▸ Neural Networks - Representation


1. Which of the following statements are true? Check all that apply.

2. Consider the following neural network, which takes two binary-valued inputs $x_1, x_2 \in \{0,1\}$ and outputs $h_\theta(x)$. Which of the following logical functions does it (approximately) compute?
• AND
This network outputs approximately 1 only when both inputs are 1.

• NAND (meaning “NOT AND”)

• OR

• XOR (exclusive OR)
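The AND answer can be sanity-checked with a quick sketch. The weights below (bias $-30$, $+20$ on each input) are the classic AND construction from the course lectures; treat them as an assumption, since the network picture is not reproduced here:

```python
import math

def sigmoid(z):
    # Logistic activation g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def and_unit(x1, x2):
    # Assumed weights: bias -30, +20 on each input
    return sigmoid(-30 + 20 * x1 + 20 * x2)

# Truth table: the output rounds to 1 only when both inputs are 1
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(and_unit(x1, x2)))
```

Because the sigmoid saturates, $g(-30)$ and $g(-10)$ are essentially 0 while $g(10)$ is essentially 1, which is why the network only "fires" on the $(1, 1)$ input.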

2. Consider the following neural network, which takes two binary-valued inputs $x_1, x_2 \in \{0,1\}$ and outputs $h_\theta(x)$. Which of the following logical functions does it (approximately) compute?
• AND

• NAND (meaning “NOT AND”)

• OR
This network outputs approximately 1 when at least one input is 1.

• XOR (exclusive OR)
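The OR case differs from AND only in the bias. The weights below (bias $-10$, $+20$ on each input) are the usual OR construction from the course; again an assumption, since the picture is not shown here:

```python
import math

def sigmoid(z):
    # Logistic activation g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def or_unit(x1, x2):
    # Assumed weights: bias -10, +20 on each input
    return sigmoid(-10 + 20 * x1 + 20 * x2)

# Truth table: the output rounds to 1 when at least one input is 1
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(or_unit(x1, x2)))
```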

3. Consider the neural network given below. Which of the following equations correctly computes the activation $a_1^{(3)}$? Note: $g(z)$ is the sigmoid activation function.
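The pattern being tested is that $a_1^{(3)} = g\big(\Theta_{10}^{(2)} a_0^{(2)} + \Theta_{11}^{(2)} a_1^{(2)} + \Theta_{12}^{(2)} a_2^{(2)}\big)$, with $a_0^{(2)} = 1$ as the bias unit. Since the network picture is not reproduced here, the parameter and activation values below are made up purely for illustration:

```python
import math

def g(z):
    # Sigmoid activation
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical row of Theta^(2) for unit 1: [bias weight, weight on a_1^(2), weight on a_2^(2)]
theta2_row1 = [0.5, -1.0, 2.0]
a2 = [1.0, 0.6, 0.3]   # a_0^(2) = 1 (bias), then a_1^(2), a_2^(2)

# a_1^(3) = g(sum over j of Theta_1j^(2) * a_j^(2))
a1_3 = g(sum(w * a for w, a in zip(theta2_row1, a2)))
```

Whatever the pictured weights are, the correct option is the one whose indices pair each $\Theta_{1j}^{(2)}$ with the matching $a_j^{(2)}$, bias term included.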

4. You have the following neural network:

You'd like to compute the activations of the hidden layer $a^{(2)} \in \mathbb{R}^3$. One way to do so is the following Octave code:

You want to have a vectorized implementation of this (i.e., one that does not use for loops). Which of the following implementations correctly compute $a^{(2)}$? Check all that apply.
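The Octave snippet isn't reproduced here, but the loop-versus-vectorized equivalence can be sketched in Python/NumPy (the shapes follow the question: 3 hidden units, 2 inputs plus a bias term; the numeric values are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters: Theta1 is 3x3, x = [1; x1; x2] with x(1) = 1 as bias
Theta1 = np.array([[ 0.1,  0.5, -0.4],
                   [-0.3,  0.8,  0.2],
                   [ 0.7, -0.6,  0.9]])
x = np.array([1.0, 0.4, 0.9])

# Loop version: what the Octave for-loop computes, one unit at a time
a2_loop = np.array([sigmoid(Theta1[i, :] @ x) for i in range(3)])

# Vectorized version: z = Theta1 * x; a2 = g(z)
z = Theta1 @ x
a2_vec = sigmoid(z)
```

The two agree because the matrix-vector product computes all three inner products $\Theta^{(1)}_{i,:} x$ at once; the correct quiz options are exactly those equivalent to `z = Theta1 * x; a2 = sigmoid(z)` in Octave.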


5. You are using the neural network pictured below and have learned the parameters $\theta^{(1)} = \begin{bmatrix} 1 & 1 & 2.4 \\ 1 & 1.7 & 3.2 \end{bmatrix}$ (used to compute $a^{(2)}$) and $\theta^{(2)} = \begin{bmatrix} 1 & 0.3 & -1.2 \end{bmatrix}$ (used to compute $a^{(3)}$ as a function of $a^{(2)}$). Suppose you swap the parameters for the first hidden layer between its two units, so $\theta^{(1)} = \begin{bmatrix} 1 & 1.7 & 3.2 \\ 1 & 1 & 2.4 \end{bmatrix}$, and also swap the output layer, so $\theta^{(2)} = \begin{bmatrix} 1 & -1.2 & 0.3 \end{bmatrix}$. How will this change the value of the output $h_\theta(x)$?
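The output stays the same: swapping the two hidden units while also swapping their outgoing weights just relabels the units, so every term in the final weighted sum is unchanged. A sketch with the question's actual parameters (the test input `x` is an arbitrary made-up value):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(Theta1, Theta2, x):
    # Forward propagation with a bias unit of 1 prepended at each layer
    a2 = sigmoid(Theta1 @ np.concatenate(([1.0], x)))
    return sigmoid(Theta2 @ np.concatenate(([1.0], a2)))

Theta1 = np.array([[1.0, 1.0, 2.4],
                   [1.0, 1.7, 3.2]])
Theta2 = np.array([1.0, 0.3, -1.2])

# Swap the two hidden units, and swap the matching output-layer weights
Theta1_sw = Theta1[[1, 0], :]
Theta2_sw = np.array([1.0, -1.2, 0.3])

x = np.array([0.5, -0.2])   # arbitrary input for the comparison
print(h(Theta1, Theta2, x), h(Theta1_sw, Theta2_sw, x))  # identical values
```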


Feel free to ask doubts in the comment section. I will try my best to answer them.
If you find this helpful in any way, like, comment, and share the post.
This is the simplest way to encourage me to keep doing such work.

Thanks & Regards,
- APDaga DumpBox

1. Why can't I represent the XOR function without hidden layers? If I have a case like question 2 but with the weights -10, 20, -20, I would get:

| x1 | x2 | XOR |
|----|----|-----|
| 0  | 0  | 0   |
| 0  | 1  | 1   |
| 1  | 0  | 1   |
| 1  | 1  | 0   |

wouldn't I?

1. NO.
If you consider a NN as given in question 2 with the weights -10, 20, -20, the unit computes $-10 + 20x_1 - 20x_2 > 0$, where the threshold for activation is 0. You will get the output as follows:

| x1 | x2 | z = -10 + 20·x1 - 20·x2 | Output (z > 0?) |
|----|----|----|----|
| 0  | 0  | -10 | 0 |
| 0  | 1  | -30 | 0 |
| 1  | 0  | 10  | 1 |
| 1  | 1  | -10 | 0 |

This doesn't represent XOR at all (it is actually x1 AND (NOT x2)).
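The truth table in this reply can be checked directly. A single linear unit can never represent XOR because XOR is not linearly separable, no matter which weights are chosen:

```python
# Single step-function unit with the reply's weights:
# bias -10, +20 on x1, -20 on x2, activation threshold 0
def unit(x1, x2):
    z = -10 + 20 * x1 - 20 * x2
    return 1 if z > 0 else 0

table = {(x1, x2): unit(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
# This computes "x1 AND (NOT x2)", not XOR: only the (1, 0) input fires
```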

2. Please explain question 2 more clearly; I did not understand what the target output is.

1. I can't understand what exactly you want to ask.

3. In the 2nd question they ask which of the logical functions it (approximately) computes. What output should come out to satisfy the truth table? For the first part you answered that the network outputs approximately 1 only when both inputs are 1, and similarly for the second part. What should the output be to satisfy the truth table (that is, with the weights -30, 20, 10)?

1. Even though your question is not very clear to me, I am trying to answer it as per my understanding.
1. In Q2(1), the answer is an AND gate, as the output is 1 only when "both" inputs are 1.
2. In Q2(2), the answer is an OR gate, as the output is 1 when "at least one" input is 1.
NOTE: Here we have considered the activation threshold = 0.

3. If in Q2 we consider the weights -30, 20, 10, then the answer is an AND gate (considering activation threshold = -1):

| 1 (bias, w = -30) | x1 (w = 20) | x2 (w = 10) | z = -30 + 20·x1 + 10·x2 | Output (z > -1?) |
|---|---|---|---|---|
| 1 | 0 | 0 | -30 | 0 |
| 1 | 0 | 1 | -20 | 0 |
| 1 | 1 | 0 | -10 | 0 |
| 1 | 1 | 1 | 0   | 1 |

NOTE: If we consider activation threshold = 0 here, then the x1 = 1, x2 = 1 case is ambiguous (z = 0 is not strictly greater than 0), so the unit won't represent an AND gate or any other logic gate.
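The threshold = -1 case in the reply above can be sketched the same way as the earlier examples:

```python
# Single step-function unit with weights -30, +20, +10 and threshold -1
def unit(x1, x2, threshold=-1):
    z = -30 + 20 * x1 + 10 * x2
    return 1 if z > threshold else 0

# Only (1, 1) gives z = 0 > -1, so the unit fires just for that input: AND.
# With threshold = 0 instead, z = 0 fails the strict comparison and the
# (1, 1) case becomes ambiguous, as noted above.
```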

4. Why is question 1, option 2 incorrect?

1. The outputs of a neural network are not probabilities, so their sum need not be 1.

2. True, but the outputs represent classes, and exactly one class must be true for a training example. So shouldn't the sum of all the values be 1?

5. I didn't get questions 4 and 5; can you please explain them in detail?
I mean, how does the vectorized implementation here work in place of the for loop? And similarly, in question 5, how do I calculate whether the output changes or remains the same?

6. Consider the MNN with sigmoidal functions and the training data set x1: (0.6, 0.2), x2: (0.1, 0.3), t1: (1, 0), t2: (0, 1).