Stochastic and Mini-Batch Gradient Descent

Is the time complexity of Stochastic and Mini-Batch gradient descent the same or not?
If not, then what is the time complexity of each?
If yes, then is the complexity
Max_iteration * number_of_samples * no_of_features
?
Because in every iteration (epoch) we update theta after every batch, i.e.
(number_of_samples / batch_size) times.

That means we are traversing number_of_samples samples in each iteration, and each update touches every feature of the dataset.
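
A quick back-of-the-envelope count (a sketch in Python; all sizes here are made-up assumptions) of what one epoch costs:

```python
# Hypothetical sizes, purely for the counting argument.
number_of_samples = 1000
no_of_features = 20
batch_size = 50                                        # set to 1 for pure SGD

updates_per_epoch = number_of_samples // batch_size    # 20 updates per epoch
cost_per_update = batch_size * no_of_features          # gradient over one batch

cost_per_epoch = updates_per_epoch * cost_per_update
assert cost_per_epoch == number_of_samples * no_of_features  # batch_size cancels
print(cost_per_epoch)   # 20000, regardless of batch_size
```

If this counting is right, batch_size drops out, and each epoch costs number_of_samples * no_of_features either way.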

In a sense, both Stochastic and Mini-Batch gradient descent have the same complexity, max_iter * num_of_samples sample visits (equivalently max_iter * num_of_samples * no_of_features operations), but only when we do a fixed number of epochs through the data.
In practice, though, we keep running the algorithm until it converges to a solution, and since the two methods work differently, each needs a different number of epochs, and hence a different amount of time, to reach that goal.
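
For concreteness, here is a minimal NumPy sketch of both update rules for plain linear regression (a sketch, not the course's implementation; names like lr and max_epochs are assumptions for illustration). Note that per epoch, both loops visit every sample exactly once:

```python
import numpy as np

def sgd(X, y, lr=0.01, max_epochs=100):
    # Stochastic gradient descent: one sample per update.
    theta = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        for i in np.random.permutation(len(y)):       # num_samples updates per epoch
            grad = (X[i] @ theta - y[i]) * X[i]       # O(no_of_features) per update
            theta -= lr * grad
    return theta

def minibatch_gd(X, y, lr=0.01, max_epochs=100, batch_size=32):
    # Mini-batch gradient descent: batch_size samples per update.
    theta = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        idx = np.random.permutation(len(y))
        for start in range(0, len(y), batch_size):    # num_samples/batch_size updates
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ theta - y[b]) / len(b)  # O(batch_size * no_of_features)
            theta -= lr * grad
    return theta
```

So one epoch costs the same for both; what differs is how noisy each update is, and therefore how many epochs each method needs before it converges.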
Hope this helped 🙂

I hope I’ve cleared your doubt. If you still have questions or don’t find the answer satisfactory, you may reopen the doubt.