Artificial intelligence: backpropagation of errors

02 Apr 2018 | Posted by Doug Rose

I have a very vivid memory of being a kid splitting a small bag of jellybeans with my friend. We were very good at sharing the bag. He would eat two, and then I would eat two. We worked together to empty the bag.

As we ate our way down, I noticed that my friend was ignoring the black jellybeans. So as we got close to the bottom, black beans made up more and more of what was left. I asked him why he was leaving them.

He said he knew that those were my favorite, so he was saving them all for me. I didn’t have any memory of eating a black jellybean, but since he said they were my favorite I was eager to try them. Without thinking, I drew out two black beans, popped them into my mouth and began to chew. These little beans were among the vilest things I’d ever tasted, like a mixture of soap, bug spray and candles. I spit them out into the bag, ruining the rest of the jellybeans.

From that day forward I was deeply suspicious of any of the darker colored jellybeans. I figured I had made a calculation error by eating those two black jellybeans, so I set out to correct the error by staying closer to the other end of the color gradient.

My friends and family encouraged me to move further down the color gradient. I delved into more experimental colors like green, red and even purple. Each time I achieved some success with a darker color, I would go a little further down the gradient.

I wasn’t thinking about it at the time, but I was actually using gradient descent to do a form of backpropagation. Backpropagation (backprop for short) is a popular way to figure out how much each connection contributed to the network’s error; gradient descent then uses that feedback to adjust the weights of the connections between neurons. Together, these algorithms twist the dials of your artificial neural network to gradually produce more accurate outputs.
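
To make the dial-twisting concrete, here is a minimal sketch of gradient descent in Python. The error function, starting point, and learning rate are all invented for illustration; a real network would be turning thousands of dials at once rather than one.

```python
def error(w):
    """A toy error surface: the error is lowest when the dial sits at 3."""
    return (w - 3) ** 2

def error_gradient(w):
    """Derivative of the toy error with respect to the dial setting."""
    return 2 * (w - 3)

w = 0.0              # the dial's starting position
learning_rate = 0.1  # how far to turn the dial on each adjustment

for step in range(25):
    w -= learning_rate * error_gradient(w)  # step against the gradient

print(round(w, 3))  # lands near 3.0, the setting with the least error
```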

To understand gradient descent, imagine a color gradient that shows a shaded progression of colors from white to black.

Gradient Descent

In my jellybean example, I gradually moved along this gradient, choosing darker and darker colors. I didn’t move on to a darker color until I knew that the jellybean I had just eaten was safe. In other words, I made tiny adjustments along the color gradient and used the taste of each jellybean as feedback to tell me that I was on the right track.

Gradient descent and backpropagation enable adjustments like these in an artificial neural network. If a network were learning to expand its menu of jellybeans, it would start out with white jellybeans and move along the color gradient, sampling darker and darker beans. The network would test the flavor of each until it eventually tasted a black jellybean, at which point the backprop algorithm would kick in and tell the network that it had gone too far along the color gradient. (Keep in mind that backprop is typically used only for supervised learning.)
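
As a rough sketch of that supervised feedback loop, the toy Python below trains a single “taste tester” on made-up jellybean data. The darkness values, the labels, and the simple logistic model are all assumptions chosen for illustration, not a real implementation.

```python
import math

def predict_safe(darkness, weight, bias):
    """A logistic 'taste test': estimated probability that a bean is safe."""
    return 1 / (1 + math.exp(-(weight * darkness + bias)))

# Made-up training data: (darkness, safe label) where 0.0 is white, 1.0 is black
samples = [(0.1, 1), (0.4, 1), (0.7, 1), (1.0, 0)]

weight, bias, lr = 0.0, 0.0, 0.5
for epoch in range(1000):
    for darkness, label in samples:
        p = predict_safe(darkness, weight, bias)  # forward pass
        error = p - label                         # supervised feedback signal
        weight -= lr * error * darkness           # backprop-style weight tweak
        bias -= lr * error

print(predict_safe(0.2, weight, bias))  # pale bean: high probability it is safe
print(predict_safe(1.0, weight, bias))  # black bean: much lower probability
```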

These algorithms work by adjusting the weights of the connections. Each connection between neurons in an artificial neural network has a weight, which shows the strength of the link from one neuron to the next. In simple examples the weights run from zero to one: the closer a weight is to one, the stronger the connection; the closer it is to zero, the weaker the connection. (In practice, weights can also be negative or larger than one.) A neural network adjusts these weights over time as a way to match different patterns. A strong connection shows a clear match; a weak connection shows only a possible match, or no match at all.
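
Here is a small sketch of how those weights behave, assuming a single neuron that squashes the weighted sum of its inputs into the zero-to-one range; the input signals and weight values are arbitrary examples.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of incoming signals, squashed into the 0-1 range."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

inputs = [0.9, 0.2, 0.6]    # signals arriving from three earlier neurons

strong = [0.9, 0.8, 0.95]   # weights near one: the inputs come through clearly
weak   = [0.05, 0.1, 0.02]  # weights near zero: the inputs barely register

print(neuron_output(inputs, strong))  # well above 0.5
print(neuron_output(inputs, weak))    # hovers near 0.5, a weak signal
```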

Strong Connections

With supervised learning, you need a way to let the neural network know when it has made a mistake, whether it has failed to identify a match or has falsely identified one. Suppose the neural network mistakes a purple jellybean for a black jellybean. The backprop algorithm tweaks the weights of the connections to reduce the chance that the network will make the same mistake in the future.
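
One corrective tweak might look like the sketch below. All of the numbers are invented: the toy neuron starts out giving a purple jellybean roughly a two-thirds chance of being black, the supervised label says it is not black, and a single gradient step shrinks that mistake.

```python
import math

x = 0.85                  # made-up "darkness" feature of the purple bean
weight, bias = 2.0, -1.0  # made-up current connection weight and bias
lr = 0.5                  # learning rate: how hard to twist the dial

p_black = 1 / (1 + math.exp(-(weight * x + bias)))  # about 0.67: too confident
label = 0.0                                         # supervised truth: not black

# For this toy neuron, the gradient of the error with respect to the weight
# is (p_black - label) * x, so the update nudges the weight downward.
weight -= lr * (p_black - label) * x
bias -= lr * (p_black - label)

p_after = 1 / (1 + math.exp(-(weight * x + bias)))
print(p_black, p_after)  # the second probability is lower: mistake reduced
```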

Remember that my friends and family had to coax me into trying darker colored jellybeans. The same is true with an artificial neural network. A human being has to identify the white and black jellybeans and then help the network twist its dials to expand its menu of acceptable jellybeans.
