There's a phenomenon in AI culture and media: once AI succeeds at something, it's no longer AI. This is known as "The AI Effect."
As Doug Hofstadter put it: "AI is whatever hasn't been done yet."
These things are often spoken with a sense of unfairness, as though (1/)
everyone is just moving the goalposts at the expense of AI researchers.
The idea is that machines have *already* achieved tremendous feats of cognition; we just discount them as intelligence because "now the magic is gone." (2/)
But a deeper argument can be made that maybe things we thought were hard just...aren't as hard as we thought. Maybe the goalpost-movers...have a point. (3/)
I'm very much of the school of thought that, while AI tech has undoubtedly seen huge advances in recent years, we haven't made much (if any) meaningful progress toward systems with true "understanding" (i.e., abstract reasoning, self-awareness, and so on). (4/)
Viewed in this way, the so-called "AI effect" is more a reflection of our continual process of refining what intelligence actually is (and is not), rather than critics merely moving the goalposts. (5/)
When Deep Blue beat Garry Kasparov in 1997, what we actually learned is that chess is computationally "easier" than we thought. Deep Blue didn't go on to write poetry or solve global hunger. (6/)
It's critical to note here that I mean specifically *what we learned about machine intelligence*. These were obviously great advances in computer science. But they weren't generalizable achievements in AI. (7/)
Now, modern reinforcement learning techniques do seem far more generalizable than earlier achievements like Deep Blue. (8/)
The insights from AlphaZero, MuZero, et al. actually do seem to apply across far more general domains, so maybe we're starting to edge into truer manifolds of intelligence. Maybe. Or maybe we've just gotten better at building mindless statistical simulacra. (9/)
Does that mean we should stop pushing forward? Stop researching? Give up on the quest for better machine cognition? Of course not. At least not for these reasons! (10/)
But if history is any guide, even today we're still much further from Artificial *Intelligence* than we like to think.
If I'm betting, the safe money is on the side of those who, with each new achievement, indeed keep saying "well then that's just not AI." (/end)