"the one-point jump is never a bad move" https://senseis.xmp.net/?IkkenTobiIsNeverWrong
There's a Master version of several go AIs whose training includes some human games as well as self-play.

The Master version plays a bit more humanly.
I wonder what'd happen if you took like 2 dozen go proverbs, and had human master players play games expressing each proverb as much as possible.

Several hundred 5d games where both players keep "a 1-point jump is never bad" in mind throughout their game, etc.
Suppose for each of those proverbs, you provided a corpus of games explicitly exhibiting it.
Now your AI can correlate moves to proverbs.
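A minimal sketch of what that could look like, assuming you already have a policy network that spits out a feature vector per position. Everything here (ProverbHead, NUM_PROVERBS, the label format) is hypothetical, just to make "correlate moves to proverbs" concrete:

```python
import torch
import torch.nn as nn

# Hypothetical auxiliary head bolted onto an existing policy network.
# Each game in the proverb corpus is tagged with the proverb(s) the
# players were deliberately expressing.
NUM_PROVERBS = 24  # "a one-point jump is never bad", "when in doubt, tenuki", ...

class ProverbHead(nn.Module):
    def __init__(self, feature_dim: int, num_proverbs: int = NUM_PROVERBS):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, num_proverbs)

    def forward(self, board_features: torch.Tensor) -> torch.Tensor:
        # One logit per proverb; a single move can express several at once.
        return self.classifier(board_features)

def proverb_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Multi-label objective, trained alongside the usual policy/value losses.
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```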

You could ask it why it moved somewhere.

"a one-point jump is never a bad move" as it responds.
With some tweaking, you could get it to express proverbs it discovers via self-play.
"why did you move here" becoming something that could be answered.
For instance, the early 3-3 invasion.

AlphaGo couldn't explain why it valued that, trading off influence and territory the way it did.
AlphaGo has an idea of thickness and influence; it invades 3-3 early because it values limiting the opponent's outside influence more than it values the territory.

But it can't express that other than by playing it.
That's not the interesting thing.

The interesting thing is that unless you are Ke Jie, odds are it'll backfire on you to emulate that move. You're not strong enough to take advantage of that limited thickness.
So: the AI is expressing a proverb through its play, something it discovered.

How general is that proverb, whatever it is? Is early 3-3 invasion just a specific, noticeable result of following it?
If you played an AI that could explain its moves, and asked it about the early 3-3 invasion and then a move on that same side dozens of moves later, would it cite the same proverb?
Human players can study specific moves like early 3-3, figure out why the AI is doing it, and then with varying degrees of success imitate them.

But there is a principle behind that move, a concept more general than the move itself.
Saying "the AI doesn't value corner territory as much as we did" isn't good enough, it's nowhere close to being as useful as "when in doubt, tenuki" or "a 1-point jump is never bad"
There *is* a concept there, a go proverb that it discovered, mastered, and applied in broad circumstances.

It can't speak, though. It can only express that proverb through play.
And that's not the real interesting part.

Have these existing Go AIs already discovered proverbs that simply *can't* be explained in language?
You can examine early 3-3 invasions, explain the reasoning, Monte Carlo them to show their effectiveness, explain why they work.

And you can do the same with a move dozens of plies later.
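That "Monte Carlo" step is just brute-force evaluation. A toy sketch, assuming a hypothetical engine interface (play, legal_moves, is_over, winner, to_move are all made up here, not a real library):

```python
import random

def monte_carlo_winrate(position, move, playouts: int = 1000) -> float:
    """Crude estimate of a move's effectiveness: play it, finish the game
    with random moves many times, and count how often that side wins."""
    side = position.to_move()              # whoever is about to play `move`
    wins = 0
    for _ in range(playouts):
        node = position.play(move)         # copy-and-play the candidate move
        while not node.is_over():
            node = node.play(random.choice(node.legal_moves()))
        if node.winner() == side:
            wins += 1
    return wins / playouts
```

The number tells you *that* each move works, not the principle behind it.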

But, can their connection be explained as one principle being applied?
In other words: have Go AIs already reached the point where humans are uselessly looking at the finger instead of the moon?
The *real* interesting question is: how do you work with an AI when that's the case? How do you effectively communicate what exactly it is doing, short of tossing someone a running thread of its computations and saying "look at the moon"?
Now imagine that instead of discovering and then applying principles to a board game, it's doing so with biology.

You'd hope that it could communicate those principles, not just its results.
What if it never does, though? What if humans are left to just pick through its results, able to explain each specific part but not the general principle that informed that part?
What happens when an AlphaProof teaches itself mathematics, solves a Hilbert problem, but its solution can't be generalized by humans? When the things it has discovered are only expressed via its results?
The real interesting thing is that the communication is more important than the result. It's more valuable to express the concepts which led to a solution than the solution itself.

So far, very little work has focused on communicating these concepts back to humans.
What happens when communication is ignored, and the gulf widens until it becomes impossible to understand the why, only the how, and only the how when zoomed in on a specific part?
You'd get that SF trope of AIs pumping out inscrutable research, and then humans spending entire academic careers trying to figure out what one small part of this specific proof or whatever is doing.
Which is already what research is now, except without AIs.
The difference is that a Feynman can come along and explain something well enough for a kid to understand. People can explain quite abstract, complex things to other people. No concept is so hard that only one person can ever understand it.
With AIs, that can stop being true.

Right now, the only person who really, fully understands why early 3-3 invasions in go work is an AI. And nobody thought to give it a way of communicating that.
What happens when an AlphaProof produces a paper solving a Hilbert problem, and not a single Feynman exists to explain what it's doing to the rest of us?
An interesting question is: why don't the people making such things seem to value communication?