I too thought that changing the learning rate should not increase the error, but the learning rate was the only change I made in the program, and the error plot changed as a result.
To understand why this was happening, I tried a large learning rate on a linear regression problem with a single feature. I found that while gradient[0], responsible for theta[0], was decreasing with iterations as expected, gradient[1], responsible for theta[1], was increasing. As a result, the slope of our best-fit line moved further and further away from the scatter of the training data, and that led to an increase in error.
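For anyone who wants to reproduce this, here is a minimal sketch of the experiment (the dataset, learning rates, and iteration count are my own assumptions, not the original program). With a step size above the stability threshold, batch gradient descent overshoots along the steepest direction of the loss surface, so the parameter tied to that direction (here the slope, theta[1]) diverges and the MSE grows instead of shrinking:

```python
import numpy as np

# Hypothetical single-feature data: y = 2x + 1 (assumed for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0
X = np.column_stack([np.ones_like(x), x])  # columns: intercept term, feature
m = len(y)

def run_gd(lr, iters=20):
    """Batch gradient descent on MSE; returns theta and the error history."""
    theta = np.zeros(2)
    errors = []
    for _ in range(iters):
        gradient = X.T @ (X @ theta - y) / m  # gradient[0] -> theta[0], gradient[1] -> theta[1]
        theta -= lr * gradient
        errors.append(np.mean((X @ theta - y) ** 2))
    return theta, errors

# Small learning rate: the error decreases monotonically
_, errors_small = run_gd(lr=0.01)
# Large learning rate: updates overshoot and the error blows up
_, errors_large = run_gd(lr=0.2)

print(errors_small[0], errors_small[-1])  # decreasing
print(errors_large[0], errors_large[-1])  # increasing
```

For this data the largest eigenvalue of the Hessian (X.T @ X / m) is about 11.8, so any learning rate above roughly 2 / 11.8 ≈ 0.17 makes the iteration unstable, which is why 0.2 diverges while 0.01 converges.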
Thanks for the help anyway.