Forward Propagation MLP

The formula for z in the notes transposes the weight matrix, i.e. z = Wᵀx + b. However, in the code we write:
W1 = np.random.randn(input_size, layers[0])
and
z1 = np.dot(x, W1) + b1

Then why isn't W1 transposed here?

Hey Varun,
I know the code sometimes looks a bit different from the formula in the notes. This comes down to the shapes being used, i.e. whether each sample is a column vector or a row vector. The formula changes depending on which convention you pick.

To verify, always check the shapes of the matrices and vectors involved in the dot product.
Here z1 = np.dot(x, W1) + b1, where m is the number of examples, n = input_size, and h = layers[0]:
x  -> (m, n)
W1 -> (n, h)
b1 -> (1, h)  (broadcast across all m rows)

np.dot((m, n), (n, h)) gives (m, h), so the inner dimensions line up. This looks fine, doesn't it? If you transposed W1 here, the shapes would become (m, n) and (h, n), and the dot product would fail.
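Here is a minimal sketch you can run to check the shapes yourself (the sizes m, n, and h below are made-up examples, not values from the course code):

```python
import numpy as np

m, n, h = 4, 3, 5                    # made-up sizes: 4 examples, 3 features, 5 hidden units

x  = np.random.randn(m, n)           # input batch, shape (4, 3)
W1 = np.random.randn(n, h)           # weights, shape (3, 5) -- no transpose needed
b1 = np.zeros((1, h))                # bias, shape (1, 5), broadcast over the rows

z1 = np.dot(x, W1) + b1              # (4, 3) . (3, 5) -> (4, 5)
print(z1.shape)                      # (4, 5)

# Transposing W1 breaks the inner dimensions: (4, 3) . (5, 3) is invalid
try:
    np.dot(x, W1.T)
except ValueError as e:
    print("shape mismatch:", e)
```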

Many times you will see the formula change from person to person, depending on how they define the shapes of their matrices. With the column-vector convention (each sample is a column of x), you get z = Wᵀx + b; with the row-vector convention, as in our code, it becomes z = xW1 + b1. Both compute the same values.
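To see that the two conventions really compute the same numbers, here is a quick check (again with made-up sizes, bias omitted for brevity):

```python
import numpy as np

m, n, h = 4, 3, 5
x  = np.random.randn(m, n)           # row-vector convention: each sample is a row
W1 = np.random.randn(n, h)

# Row convention (our code):     z = x . W1      -> (m, h)
z_rows = np.dot(x, W1)

# Column convention (the notes): z = W1ᵀ . xᵀ    -> (h, m)
z_cols = np.dot(W1.T, x.T)

# Same numbers, just transposed
print(np.allclose(z_rows, z_cols.T))   # True
```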

Hope this clears your doubt.
Thanks :slight_smile: