Stochastic Gradient Descent

Kinder Chen
Sep 9, 2021


Stochastic Gradient Descent picks a random instance in the training set at every step and computes the gradients based only on that single instance. Working on a single instance at a time makes the algorithm much faster because it has very little data to manipulate at every iteration. It also makes it possible to train on huge training sets, since only one instance needs to be in memory at each iteration.
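Here is a minimal sketch of a single Stochastic Gradient Descent step for linear regression with an MSE cost. The shapes and names (a feature matrix `X` with a bias column, a target column vector `y`, a learning rate `eta`) are assumptions for illustration, not a fixed API from the article.

```python
import numpy as np

def sgd_step(theta, X, y, eta):
    """One SGD step: compute the gradient from a single random instance."""
    m = len(X)
    i = np.random.randint(m)                 # pick one instance at random
    xi, yi = X[i:i+1], y[i:i+1]              # keep 2-D shapes for the matrix products
    gradient = 2 * xi.T @ (xi @ theta - yi)  # MSE gradient from this single instance
    return theta - eta * gradient            # move against the gradient
```

Because only one row of the training set is touched per step, the memory footprint per iteration is tiny, which is what makes training on very large datasets practical.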

Due to its stochastic or random nature, this algorithm is much less regular. Instead of gently decreasing until it reaches the minimum, the cost function will bounce up and down, decreasing only on average. Over time it will end up very close to the minimum, but once it gets there it will continue to bounce around, never settling down. So once the algorithm stops, the final parameter values are good, but not optimal. When the cost function is very irregular, this can actually help the algorithm jump out of local minima, so Stochastic Gradient Descent has a better chance of finding the global minimum. Therefore, randomness is good to escape from local optima, but bad because it means that the algorithm can never settle at the minimum. One solution to this dilemma is to gradually reduce the learning rate. The steps start out large, then get smaller and smaller, allowing the algorithm to settle at the global minimum.
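One simple way to implement this gradual decrease, sketched here under assumed constants rather than prescribed values, is to make the learning rate a decreasing function of the iteration number `t`:

```python
def learning_schedule(t, t0=5, t1=50):
    """Decaying learning rate: large steps early, smaller steps later,
    so the parameters can settle near the minimum.
    t0 and t1 are illustrative hyperparameters, not values from the article."""
    return t0 / (t + t1)
```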

The function that determines the learning rate at each iteration is called the learning schedule. If the learning rate is reduced too quickly, you may get stuck in a local minimum, or even end up frozen halfway to the minimum. If the learning rate is reduced too slowly, you may jump around the minimum for a long time and end up with a suboptimal solution if you halt training too early. By convention we iterate by rounds of m iterations, where m is the number of training instances; each round is called an epoch. Since instances are picked randomly, some instances may be picked several times per epoch, while others may not be picked at all. If we want to be sure that the algorithm goes through every instance in each epoch, another approach is to shuffle the training set (taking care to shuffle the input features and the labels jointly so the pairs stay aligned), go through it instance by instance, and then shuffle it again, as sketched below.
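The following sketch puts these pieces together: several epochs over a jointly shuffled training set, with a decaying learning rate of the same form as above. The shapes, initialization, and hyperparameters are assumptions for illustration.

```python
import numpy as np

def sgd(X, y, n_epochs=50, t0=5, t1=50):
    """SGD for linear regression: each epoch is one shuffled pass over the data."""
    m, n = X.shape
    theta = np.random.randn(n, 1)                 # random initialization
    for epoch in range(n_epochs):
        idx = np.random.permutation(m)            # shuffle features and labels jointly
        X_shuffled, y_shuffled = X[idx], y[idx]
        for i in range(m):                        # one epoch = m iterations
            xi = X_shuffled[i:i+1]
            yi = y_shuffled[i:i+1]
            gradient = 2 * xi.T @ (xi @ theta - yi)
            eta = t0 / (epoch * m + i + t1)       # learning schedule: decays over time
            theta -= eta * gradient
    return theta
```

Shuffling the indices and applying the same permutation to both `X` and `y` guarantees that every instance is seen exactly once per epoch while keeping each feature row paired with its label.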
