Kids learn concepts like number or logic, but it's not really clear how that's actually possible: what could it mean for a learner (or computer!) to *not* have numbers or logic? Can you build a computer without them? And what do these abstract things mean on a neural level?
Church encoding is an idea from mathematical logic where you use one system (lambda calculus) to represent another (e.g. boolean logic). It's basically the same idea as using sets to build objects that *act like integers*, even though they're, well, sets.
https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers
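To make the idea concrete, here's a minimal sketch (mine, not from the paper) of Church encoding in Python: numbers represented purely as higher-order functions, with no built-in integers inside the encoding itself.

```python
# Church numerals: a number n is "apply f n times".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # apply f one more time

# Addition: m + n applies f m times, then n more times.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)

print(to_int(two))            # 2
print(to_int(add(two)(two)))  # 4
```

The functions *act like* numbers (successor, addition, etc.) even though nothing in them is a number, exactly like sets acting like integers in the set-theoretic construction.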
The paper describes a Church encoding *learner*. This learner takes some facts/observations about the world and tries to create a Church encoding that mirrors the world using its own internal dynamics. A generative, productive, compositional mental model. (ht @yimregister)
My favorite way of showing this idea is people who program songs into old printers. They get the internal, inherent dynamics of a printer to do something else cool. That's basically Church encoding. And, I argue, what you do when you learn.
But for brains, we need the most general form of learning possible because people can learn lots of different things. The paper argues that combinatory logic is a nice formalism for this because it's Turing-complete and compositional, like much of thought.
The paper shows how domains like number, logic, dominance relations, domain theories, family trees, grammar, recursion, etc. can be constructed by a Church-encoding system. And the paper shows that combinatory logic structures often generalize in the right ways.
For instance, in the number domain, you could see the first few numbers and induce a system isomorphic to the natural numbers, without having them to start with. There are a bunch of (bad) arguments in cogsci claiming that's *not possible* even in principle!
The system generalizes sensibly because constructing these representations is essentially the same as learning short programs that explain the data you see. These ideas connect closely to inductive systems that work over computations, dating back to Solomonoff.
Combinatory logic itself is super cool: it was developed in the early 1900s to avoid using variables in logic. Variables are a goddamn nightmare because they change the *syntax* of the system. You can use "x" and "y" in defining f(x,y), but you only get "x" in defining f(x).
This makes f(x,y)-like notation a pain to handle. What early logicians figured out is that in fact you never NEED variables. Instead, you can translate equations with variables into compositions of functions (combinators) that have NO variables anywhere.
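Here's a tiny sketch (my illustration, in Python) of the two classic combinators, S and K. Notice that the definition of the identity function I = S K K contains no variables at all, yet it behaves exactly like lambda x: x.

```python
# The two classic combinators, each written once:
K = lambda x: lambda y: x                     # K x y = x
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)

# Identity as a pure composition of combinators: no variable
# appears anywhere in the expression S(K)(K) itself.
I = S(K)(K)

print(I(42))  # 42
```

Checking by hand: S K K x = K x (K x) = x, so the variable-free term really does implement identity.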
The fact that variables are never necessary is a nice argument against the importance of explicit algebra-like variables (@GaryMarcus). Combinatory logic is a system where variable-like behavior can provably be achieved without any variables at all in the representation.
In this sense, the cog/neuro interest in variables has been misled by the syntax we happened to use in algebra. If math education had adopted combinatory logic-like syntax, we never would have thought that explicit, algebra-like variables were difficult or important.
This connects to neuroscience because variables, structures, and human-like computational flexibility seem hard to get into neural networks. The fact that combinatory logic uses a simple, uniform syntax without variables helps make it encodable in biologically-inspired systems.
In fact, all you need to do (in e.g. a neural network) is have a means to represent binary trees and some simple operations on them, both of which have been around in neural networks for several decades. Combinatory logic works like an assembly language.
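To illustrate the "binary trees plus simple operations" point, here's a sketch (my own, with an assumed nested-tuple encoding, not the paper's implementation): combinator terms as binary trees, where leaves are 'S'/'K' and each internal node is a (function, argument) application pair, plus one simple rewrite operation.

```python
def reduce_step(t):
    """Apply one leftmost S- or K-reduction to a term tree, if any applies."""
    if isinstance(t, tuple):
        f, x = t
        # K a b -> a
        if isinstance(f, tuple) and f[0] == 'K':
            return f[1]
        # S a b c -> (a c) (b c)
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            a, b, c = f[0][1], f[1], x
            return ((a, c), (b, c))
        # otherwise try to reduce the function subtree...
        r = reduce_step(f)
        if r != f:
            return (r, x)
        # ...or the argument subtree
        r = reduce_step(x)
        if r != x:
            return (f, r)
    return t

def normalize(t, limit=100):
    """Reduce until no rule applies (with a step bound, since CL can loop)."""
    for _ in range(limit):
        r = reduce_step(t)
        if r == t:
            return t
        t = r
    return t

# S K K a reduces to a: identity behavior, computed purely on trees.
term = ((('S', 'K'), 'K'), 'a')
print(normalize(term))  # 'a'
```

Everything here is tree rebuilding and pattern matching on a fixed, uniform syntax, which is what makes the "assembly language" analogy apt.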
Then, with those, and the Church-encoding idea, all of the structures we talk about in cognitive science (logic, number, structures, grammars, hierarchies, computational processes, etc.) are within reach.
So, a neural version of the representation is easy in principle; what's missing so far is a neural version of the learning. Any interest, @DeepMind or @OpenAI?
At the same time, Church encoding addresses some key questions about meaning. When cognitive scientists state a logical theory like lift(x,y) is cause(x,go(y,up)), what do those terms mean, e.g. neurally? It's hard to see, and that's a big part of why the fields don't connect.
In Church encoding (and related philosophical ideas), the meaning of a symbol is determined by how it interacts with other symbols, which is defined by the combinators' dynamics. So Church encoding addresses, rather than dodges, fundamental problems about the meaning of symbols.
The paper most generally tries to show how the many seemingly contradictory approaches to cognition are compatible and may interrelate.
You can follow @spiantado.