Error function in gradient descent

Why is h(x) = (theta0 + theta1*x)**2?
Why is it squared?

h(x) is always defined as theta-transpose times x; there is no square term in it. There may be some confusion on your side. In the video, the term theta0 + theta1*x1 is always denoted h(x). The square appears only when we define the loss function, and there it is applied to the error, i.e. (h(x) - y)**2.
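To make the distinction concrete, here is a minimal Python sketch (the variable names and the 1/(2m) scaling are my own assumptions, not taken from the video): the hypothesis h(x) itself is linear, and the square only enters when computing the loss.

```python
import numpy as np

# Hypothesis: h(x) = theta0 + theta1 * x  -- no square here
def h(theta0, theta1, x):
    return theta0 + theta1 * x

# Squared-error loss for a single example: (h(x) - y)**2
def squared_error(theta0, theta1, x, y):
    return (h(theta0, theta1, x) - y) ** 2

# Cost over the whole dataset, using the common 1/(2m) convention
def cost(theta0, theta1, X, Y):
    predictions = h(theta0, theta1, X)
    return np.mean((predictions - Y) ** 2) / 2

# Hypothetical data: y = 2x, so theta0 = 0, theta1 = 2 fits perfectly
X = np.array([1.0, 2.0, 3.0])
Y = np.array([2.0, 4.0, 6.0])
print(cost(0.0, 2.0, X, Y))  # prints 0.0 for the perfect fit
```

The squaring makes every error positive and penalizes large errors more heavily, which is why it appears in the loss rather than in the hypothesis.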

I hope this clarifies your doubt.

Please rate your experience here. Your feedback is very important; it helps us improve our platform and provide you the learning experience you deserve.

On the off chance you still have some questions or do not find the answer satisfactory, you may reopen the doubt.