I support everyone who's helping newcomers to the AI world by explaining complex concepts in simple terms.

But I want to advocate against oversimplification. It's our job as educators to make content as easy to understand as possible, but not easier.

Here is the reason. 👇
👉 There is a point after which simplifications stop being useful and become an obstacle to future learning.

A clear-cut example is the infamous "neural networks are brain simulations" analogy.

This analogy breaks down surprisingly quickly, and ends up hindering understanding.
An artificial neural network is a computational model that is loosely inspired by biological brains.

The biological analogy only goes this far: ANNs are composed of simple computational units that exchange information, as in the brain, and we call these units artificial neurons.

That's it.
Artificial neurons (as used in ML) are far from biological neurons, computationally speaking. But that's not the main source of friction.
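
To see just how little the analogy covers, here is a minimal sketch of one artificial neuron in plain Python: a weighted sum followed by a squashing function, nothing more. The specific inputs, weights, and bias are arbitrary numbers picked for illustration.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs followed by a
    simple nonlinearity (here, a sigmoid). That's the whole unit."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example with made-up inputs, weights, and bias.
print(artificial_neuron(inputs=[0.5, -1.2], weights=[0.8, 0.3], bias=0.1))
```

No spikes, no timing, no neurotransmitters: just arithmetic.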

The problem with the analogy is that ANNs are trained via backpropagation, a procedure that has no biological analogue whatsoever.

Why does this matter?
Because it completely changes what you can do with an ANN.

First, you need extremely simple (read: differentiable) operations inside each neuron. This is because backpropagation works by minimizing a training error (well, not exactly, but kind of), and that requires a smooth gradient (see the sketch after this list).
Second, you need neurons to connect in such a way that there is an order (i.e., they must form a directed acyclic graph), so there are neurons closer to the "beginning" and neurons closer to the "end".
Third, your ANN learns by a very fragile, inefficient procedure, in which it must see the same examples over and over, and even then it is prone to learning the wrong correlations.
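
To make those three constraints concrete, here is a minimal sketch in Python: a single sigmoid neuron learning the logical AND by gradient descent. The dataset, learning rate, and number of epochs are made up for illustration. Notice that every operation is smooth, information flows strictly forward, and learning is just the same four examples repeated thousands of times.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny made-up dataset: learn the logical AND of two binary inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

for epoch in range(10000):          # the same four examples, over and over
    for x, target in data:
        # Forward pass: strict input -> output order (a tiny DAG).
        z = w[0] * x[0] + w[1] * x[1] + b
        y = sigmoid(z)
        # Backward pass: gradient of the squared error w.r.t. each weight.
        # This only works because the weighted sum and the sigmoid are smooth.
        grad_z = (y - target) * y * (1 - y)
        w[0] -= lr * grad_z * x[0]
        w[1] -= lr * grad_z * x[1]
        b    -= lr * grad_z

print([round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2) for x, _ in data])
# After many thousands of repetitions the outputs approach the targets [0, 0, 0, 1].
```

Nothing in a biological brain looks like this loop: there is no global error signal being pushed backwards through smooth operations, neuron by neuron.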

All this makes the biological analogy useless and ultimately dangerous.
If you think of ANNs as brain simulations, you might be tempted to believe we are way closer to achieving true intelligence than we really are, and fall prey to all the "AI is gonna kill us soon" nonsense.

ANNs are much, much, much weaker than any sort of biological brain.
Yes, there is nuance in everything I said. Some models are non-differentiable, some don't require a DAG, some don't even learn by backprop.

But the message remains the same: let's make explanations as simple as possible, but not simpler. Anything simpler only hinders true learning.