Breaking Plateaus

(This article originally appeared in the Master Skill Newsletter for April 14, 2023.)

Hi, it’s Aiden.

I’ve been studying artificial intelligence recently.

And, don’t worry, this isn’t going to be some ChatGPT-written email.

(Though I did ask ChatGPT to copy my style earlier this week and I put the result on the website. Pretty entertaining, and the ending is surprisingly insightful.)

I’m curious about AI. Very curious, in fact.

Because at their core, AIs are learning machines. And we can discover a great deal about how we learn by watching their methods.

It’s like a warped mirror for our own mental processes. Different, but comparable.

There’s so much we can take away.

I want to talk about learning plateaus. And what AI teaches us about dealing with them.

Chess has a horrible reputation for brutal, lengthy, disheartening plateaus.

Plateaus are a part of learning anything, but the plateaus in Chess for most players seem… extreme.

A lot of that comes down to the methods most people use to study Chess. (But I won’t go into that here.)

There are plenty of other factors too.

Did you know AI has learning plateaus like we do? I didn’t.

AI plateaus have presented serious challenges for engineers since AI research began.

There was one type of plateau in particular they struggled with, and it’s one I know I’ve struggled with too. I hadn’t recognized it for what it was until I read about this.

It’s called the “local minimum”.

It’s best described with an example.

Picture yourself as an ant at the top of a mountain. Your goal is to get to the bottom of the mountain.

But you can only see what’s directly in front of you. Your eyes can’t see all the way to the bottom. You’re not sure exactly how far you have to go.

So you get started. You look at your options for your first steps.

One path clearly descends the fastest, so you start that way.

You reach a fork in the path. One path angles upward and the other continues down.

Going up would take you away from your goal, so you choose the downward path.

You do this over and over again as you continue down the mountain. You pick the paths that head down, reject the paths that head back up.

Eventually you come to another fork in the road. But there’s a problem:

Both paths angle back up the mountain.

Going back up the mountain would undo some of your progress. And you can’t have that!

So you keep looking around you, trying to find a way to continue down.

Working harder and harder to find some way down.

You don’t stop trying, but you make no progress.

Unknown to you, over the next ridge is a fast drop. A swift path down the next chunk of mountain.

You can’t see it where you are, but it’s there.

It just requires you to go back upward for a bit to reach it.

But because heading upward loses progress, you never find it.

You’re stuck.

Your ant-self is trapped in what’s called a local minimum.

It’s nowhere near your goal, but because every forward step requires you to lose progress, you stop.
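
If you’re curious what this looks like in code, here’s a toy sketch in Python. The “mountain” is just a formula I made up for illustration (real AI landscapes have millions of dimensions, not one), but the ant’s logic is the same: measure the slope underfoot, step downhill, repeat.

```python
def height(x):
    # A made-up one-dimensional "mountain": a shallow valley (the local
    # minimum) near x = -1, and a deeper valley (the goal) near x = 2.
    return 0.25 * x**4 - x**3 / 3 - x**2

def slope(x, eps=1e-6):
    # Like the ant, we only "see" the ground right under our feet:
    # estimate the slope from two tiny probes to either side.
    return (height(x + eps) - height(x - eps)) / (2 * eps)

x = -2.0           # start high on the left side of the mountain
step_size = 0.01
for _ in range(2_000):
    x -= step_size * slope(x)   # always step downhill; never go back up

print(f"Stopped at x = {x:.3f}, height = {height(x):.3f}")
# The ant settles in the shallow valley near x = -1 (height ~ -0.42),
# even though the deeper valley near x = 2 (height ~ -2.67) sits just
# over the ridge.
```

The update rule is the whole story: it only ever accepts downhill moves, so the ridge might as well be a wall.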

It’s like this:

AIs run into local minima all the time.

They tweak and tweak and tweak a process until any further tweak would lose progress.

But if they pushed through the initial loss of progress, they could reach a whole new level.

I’d never come across the concept outside of AI, but I know it applies to plenty of the plateaus I’ve been in.

When we’re in a plateau, it’s easy to hold onto the methods we used to reach that point. To keep looking for the direct way down when there isn’t one.

After all the work to reach this point on the mountain, going back up, even a little bit, feels too painful.

AI engineers tackled the local minimum problem by introducing some randomness into the learning process.

Essentially, they forced the AI to deal with new information and entertain new ideas. To think a little more big-picture and not get tunnel vision.

And with that input of new ideas and new things to try, the AI breaks the plateau and escapes the local minimum.

It continues steady progress towards its goal.
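
Here’s the simplest version of that randomness trick in code: the random restart. Drop the ant at a bunch of random spots and keep the best result. (Real training methods use subtler forms of randomness, like the noisy steps of stochastic gradient descent; this toy sketch, on the same made-up mountain as before, just shows the idea.)

```python
import random

def height(x):
    # Same made-up mountain as before.
    return 0.25 * x**4 - x**3 / 3 - x**2

def slope(x, eps=1e-6):
    return (height(x + eps) - height(x - eps)) / (2 * eps)

def descend(x, step_size=0.01, steps=2_000):
    # Plain gradient descent: always downhill, exactly like the stuck ant.
    for _ in range(steps):
        x -= step_size * slope(x)
    return x

random.seed(0)  # fixed seed so the run is repeatable

# Drop 20 ants at random spots between x = -3 and x = 3, let each one
# descend greedily, and keep the lowest point any of them reaches.
best_x = min((descend(random.uniform(-3, 3)) for _ in range(20)), key=height)

print(f"Best descent ended at x = {best_x:.3f}, height = {height(best_x):.3f}")
# With 20 random starts, at least one almost certainly lands on the far
# side of the ridge and rolls all the way down to x = 2.
```

And notice what a restart really is: a willingness to throw away progress and look somewhere new. Exactly the move the stuck ant refused to make.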

Breaking a plateau requires a willingness to try new ideas, and the courage to push through an adjustment phase.

That’s a powerful message for me. I hope it resonates with you too.

In case one of the new ideas you want to try is visualization training, you can sign up for the full Don’t Move Training System here.

And in case you do find yourself stuck in a plateau:

You’ve got this. I believe in you.

Here’s to the journey,

Aiden


P.S.

To go deeper into this idea, I recommend the book “How We Learn” by Stanislas Dehaene. I’m not very far into it yet, but the insights on AI learning are blowing my mind.

And this is a fascinating (but very technical) article on solving issues with local minima for deep learning algorithms. If you’re technically minded and into computer science, you might find it interesting. (It’s also where I got the idea for the ant example.)

