I am not getting why an increased receptive field is necessary, and also the non-linearity of ReLU. Totally confused. Plus, how do you measure the receptive field? And sir has said that a max-pool layer will double the receptive field. How? Please help.
Building Convnets 1 - Filter Sizes, Receptive Fields
In layman's terms, the receptive field is the amount of information from the original input that is held by the present cell or matrix.
Increased receptive field.
You can see from the image above how the receptive field increases: 1 cell of the third layer holds information from a 5×5 patch (25 cells) of the first layer, even though we applied kernels of the same size, i.e. 3×3, at each layer.
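If it helps, here is a small sketch (my own check, not code from the lecture) that verifies this empirically: it perturbs one input pixel at a time and records which pixels change a fixed cell of the third layer. `conv2d_valid` is just a throwaway helper written for this check.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2D convolution: no padding, stride 1."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

k = np.ones((3, 3))                       # same 3x3 kernel at both layers
base = conv2d_valid(conv2d_valid(np.zeros((7, 7)), k), k)

# Perturb one input pixel at a time and see which pixels affect output[1, 1]
influenced = np.zeros((7, 7), dtype=bool)
for r in range(7):
    for c in range(7):
        x = np.zeros((7, 7))
        x[r, c] = 1.0
        out = conv2d_valid(conv2d_valid(x, k), k)
        influenced[r, c] = out[1, 1] != base[1, 1]

print(influenced.sum())                   # 25, i.e. a 5x5 patch of the input
```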
Non-linearity of ReLU.
It may seem that the ReLU function cannot introduce non-linearity, but that is not true. Think of it this way: you can approximate any continuous function with lots of little rectangles, and ReLU activations can produce lots of little rectangles. In fact, in practice ReLU can build rather complicated shapes and approximate many complicated domains, and that is how it introduces non-linearity.
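As a rough illustration of the "little rectangles" idea (a toy example of mine, not from the lecture): the difference of two shifted ReLU ramps gives a step, and two steps give a box-shaped bump; summing many such bumps traces out a curve. The `box` helper below is hypothetical, built from four ReLU units.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def box(x, left=1.0, right=2.0, ramp=0.01):
    """~1 on [left, right], ~0 elsewhere, built from four ReLU units."""
    step_up = (relu(x - left) - relu(x - left - ramp)) / ramp
    step_down = (relu(x - right) - relu(x - right - ramp)) / ramp
    return step_up - step_down

x = np.array([0.0, 0.5, 1.5, 1.9, 2.5, 3.0])
print(box(x))   # ~[0, 0, 1, 1, 0, 0]: a little rectangle made only of ReLUs
```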
Max-pool layer doubles the receptive field.
Imagine the previous layer has shape (2m, 2m) (not taking channels, for easy understanding). If we apply a 2×2 max-pooling layer, the new size will be (m, m). This means 1 cell now contains information from 4 cells of the previous layer, so we say the receptive field along both width and height is doubled.
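A quick sketch of that (the shapes are my own choice, channels dropped as above), using a plain NumPy reshape instead of a framework pooling call:

```python
import numpy as np

m = 3
prev = np.arange((2 * m) * (2 * m), dtype=float).reshape(2 * m, 2 * m)  # (2m, 2m)

# 2x2 max pool with stride 2, done with a reshape trick
pooled = prev.reshape(m, 2, m, 2).max(axis=(1, 3))                      # (m, m)

print(prev.shape, "->", pooled.shape)                  # (6, 6) -> (3, 3)
print(pooled[0, 0], "covers", prev[0:2, 0:2].ravel())  # 1 cell <- 4 cells
```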
How to measure the receptive field.
Calculate the receptive field of each layer individually, then combine all of the previous layers to find the net receptive field, as done in the image.
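For reference, the usual recurrence for combining layers is r_out = r_in + (k - 1) * jump, where jump is the product of the strides of all earlier layers. Below is a small calculator I wrote around that recurrence (not code from the course); the example layer lists are assumptions for illustration.

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, ordered input -> output."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump   # each layer widens the field by (k - 1) input steps
        jump *= s             # striding makes later kernels skip input positions
    return r

# Two stacked 3x3 convs, stride 1: the 5x5 net receptive field from above
print(receptive_field([(3, 1), (3, 1)]))          # 5

# Insert a 2x2 max pool (stride 2) between them and the field grows further
print(receptive_field([(3, 1), (2, 2), (3, 1)]))  # 8
```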
Hope this helped
I didn't understand ReLU and its non-linearity, and how more non-linearity increases the receptive field.
What will be the value of the total receptive field here in the diagram?
The total receptive field of the given diagram will be 5 × 5.
ReLU is the activation function defined as 0 if x < 0 and x if x >= 0, i.e. ReLU(x) = max(0, x).
Consider a 2-layer perceptron network with 1 neuron in each layer (for simplicity), taking a single value x as input.
- w1 = 2, b1 = -6, ReLU activation
- w2 = 3, b2 = -8, ReLU activation
Case 1: Input x = 4
output of first layer = relu(2*4 + (-6)) = relu(2) = 2
output of second layer = relu(3*2 + (-8)) = relu(-2) = 0
Case 2: Input x = 6
output of first layer = relu(2*6 + (-6)) = relu(6) = 6
output of second layer = relu(3*6 + (-8)) = relu(10) = 10
Notice that in both cases we passed positive inputs, yet the model gave very different outputs (0 and 10), with the first case clamped flat to zero by ReLU. It is this non-linearity of the ReLU function that lets the network carve up the input space in ways a purely linear model cannot. This is a very basic example; now imagine an actual neural network with multiple layers and many neurons, each followed by ReLU. The non-linearity introduced in that case is far greater.
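If you want to check the numbers, here is the same two-neuron example written out as a short runnable sketch (the function names are my own):

```python
def relu(z):
    return max(0.0, z)

def tiny_net(x, w1=2, b1=-6, w2=3, b2=-8):
    h = relu(w1 * x + b1)     # first layer:  relu(2x - 6)
    return relu(w2 * h + b2)  # second layer: relu(3h - 8)

print(tiny_net(4))   # relu(2) = 2, then relu(3*2 - 8) = relu(-2) = 0
print(tiny_net(6))   # relu(6) = 6, then relu(3*6 - 8) = relu(10) = 10

# The ReLU clamps the first case to exactly 0; a purely linear stack of
# layers collapses to a single affine map a*x + b and has no such flat region.
```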
Hope this cleared your doubt.