In this video, what is ReLU?

Here we use sigmoid as the activation, so what is relu? Have I missed anything? And for loss we use epoch, so what is 'binary_crossentropy' here? And in previous videos we saw the model as a neural network, but here we use the model as Sequential. What is a Sequential model?

ReLU is also an activation function, one that is highly preferred over other activation functions these days. It is defined as y = max(0, x).

Advantages of ReLU:

  • Biological plausibility: One-sided, compared to the antisymmetry of tanh.
  • Sparse activation: For example, in a randomly initialized network, only about 50% of hidden units are activated (i.e. have a non-zero output).
  • Better gradient propagation: Fewer vanishing gradient problems compared to sigmoidal activation functions that saturate in both directions.
  • Efficient computation: Only comparison, addition and multiplication.
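
For intuition, here is a minimal NumPy sketch (not from the video) of that definition, y = max(0, x):

    import numpy as np

    def relu(x):
        # ReLU: element-wise max(0, x); negative inputs are clamped to 0
        return np.maximum(0, x)

    print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
    # prints [0.  0.  0.  1.5 3. ]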


An epoch is a totally different concept from a loss function. Epochs refer to the number of passes over the entire training dataset that we make while training a model. binary_crossentropy, on the other hand, is a loss function, and we use it only when there are two target (y) classes.
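
To make the distinction concrete, here is a minimal sketch assuming the tf.keras API; the toy data, layer sizes and parameter values are made up for illustration:

    import numpy as np
    from tensorflow import keras

    # Hypothetical toy data: 100 samples, 4 features, labels are 0 or 1
    X = np.random.rand(100, 4)
    y = np.random.randint(0, 2, size=(100,))

    model = keras.Sequential([
        keras.Input(shape=(4,)),
        keras.layers.Dense(8, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

    # binary_crossentropy is the loss function: it scores each prediction
    # against the true 0/1 label, so it is used only for two-class problems.
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # epochs is a training setting, not a loss: it is how many full passes
    # over X and y the fit loop makes.
    model.fit(X, y, epochs=5, batch_size=16)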


The simplest way to build an artificial neural network model is with the Sequential class, which is a linear stack of layers.
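
As a sketch (the layer sizes are arbitrary), the two forms below build the same Sequential model, either by passing the whole stack of layers at once or by adding them one at a time:

    from tensorflow import keras

    # Pass the linear stack of layers to the constructor
    model = keras.Sequential([
        keras.Input(shape=(4,)),
        keras.layers.Dense(8, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

    # Equivalent: start empty and add layers one at a time
    model = keras.Sequential()
    model.add(keras.Input(shape=(4,)))
    model.add(keras.layers.Dense(8, activation='relu'))
    model.add(keras.layers.Dense(1, activation='sigmoid'))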