Hot take: the FP schism between Haskell and Lisp/Scheme is not *really* about static vs dynamic types, or purity vs effects; it's about extensional vs syntactic compositionality. 1/🧵
Extensionality means (to me, at least) that your expressions are to be seen as mere pointers into a Platonic "semantic domain" where all equivalent objects are truly the same (formally, a set quotiented by an equivalence relation; even more formally, the 0-truncation of a higher groupoid) 2/🧵
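In symbols (my paraphrase of the above, with ⟦·⟧ as a notional denotation map): extensional equivalence is just equality of denotations.

```latex
\llbracket \cdot \rrbracket : \mathrm{Syntax} \to \mathrm{Semantics},
\qquad
e_1 \simeq e_2 \iff \llbracket e_1 \rrbracket = \llbracket e_2 \rrbracket
```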
The syntactic approach is to take expressions exactly as they are. While in the extensional philosophy, an expression in code REFERS to data, in the syntactic philosophy, the code IS the data. Two pieces of code are equal if and only if they have the same syntax tree. 3/🧵
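To make the contrast concrete, a tiny Haskell illustration (mine; the names double1/double2 are hypothetical): the two definitions are extensionally the same function but syntactically different trees.

```haskell
-- Extensionally equal: the same output for every input, hence the
-- same point in the semantic domain. Syntactically distinct: two
-- different parse trees, hence unequal on the "code is data" view.
double1, double2 :: Int -> Int
double1 x = x + x
double2 x = 2 * x
```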
Now, if you want syntax trees to be the primary objects you work with, and you want to be able to compose them and manipulate them elegantly, you want a very small grammar where there is really only one kind of thing to handle. In Lisp this thing is called a "cons cell". 4/🧵
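A minimal sketch of that one-kind-of-thing grammar, written in Haskell for continuity (the type SExpr and the value plusOneTwo are my own illustrative names):

```haskell
-- The whole Lisp grammar as a single recursive type: atoms plus the
-- cons cell. Every program is also an ordinary value of this type.
data SExpr = Atom String | Nil | Cons SExpr SExpr
  deriving (Eq, Show)

-- The expression (+ 1 2) as a tree of cons cells. The derived Eq is
-- exactly the syntactic equality above: equal iff the same tree.
plusOneTwo :: SExpr
plusOneTwo = Cons (Atom "+") (Cons (Atom "1") (Cons (Atom "2") Nil))
```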
In the extensional philosophy, the purpose of the syntax is just to *refer* to some Platonic ideal in the semantic domain, so you want as many tools in the grammar as you can get, so you can refer elegantly. Only the compiler implementation teams need to pay the price. 5/🧵
(Incidentally, this is why there is basically only one Haskell implementation, and basically infinitely many Lisp implementations.) 6/🧵
In the extensional philosophy, as soon as code is written, the code itself should be stripped away—its "references" followed—to reveal the best equivalent computable representations for the semantics it expresses. To know what representations to use, we need static types. 7/🧵
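One small example of types picking representations (mine, not from the thread): the very same numeral compiles to different bits depending on its static type.

```haskell
-- Same syntax, different machine representations; only the static
-- type tells the compiler which one to emit.
x :: Int
x = 3        -- a machine integer

y :: Double
y = 3        -- an IEEE-754 double
```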
Furthermore, computable representations of objects that also can depend on runtime input, or produce arbitrary side effects, get very constrained. They basically have to take the shape of code! Then equivalence becomes undecidable, case analysis is intractable… Ergo purity. 8/🧵
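A hedged illustration of that constraint (note that IO Int has no Eq instance, so asking whether two actions are "equal" is not even expressible):

```haskell
-- A pure value has a normal form the compiler can compute and compare.
pureAnswer :: Int
pureAnswer = sum [1 .. 10]   -- may be replaced by 55 at compile time

-- An effectful value depends on runtime input, so it stays code-shaped:
-- the only thing you can do with it is run it.
effectfulAnswer :: IO Int
effectfulAnswer = do
  line <- getLine
  pure (length line)
```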
The advantage of the Lisp/syntactic approach is that all affordances available to the programmer are also available to every function (at runtime). Functions can perform effects, redefine themselves, and inspect the internal structures of themselves & their inputs. 9/🧵
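Continuing the hypothetical SExpr sketch from above: once function bodies are ordinary data, any runtime function can traverse them. For instance, a made-up analyzer that counts occurrences of a symbol in a piece of code:

```haskell
-- Walk a syntax tree at runtime, counting uses of a given atom.
countAtom :: String -> SExpr -> Int
countAtom s (Atom a)   = if a == s then 1 else 0
countAtom _ Nil        = 0
countAtom s (Cons h t) = countAtom s h + countAtom s t

-- countAtom "+" plusOneTwo == 1
```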
The type theory community has made great strides in understanding the extensional semantics of more Lisp-like values:
— Gradual type theory, for values with non-static types https://arxiv.org/pdf/1811.02440.pdf
— Effect systems, for values with impure side effects https://dl.acm.org/doi/pdf/10.1145/3428194
10/🧵
However, neither of those helps with the Lisp affordance to inspect the internal syntactic structure of functions at runtime. This thread was motivated by thinking about a possible direction for that... 11/🧵
From an extensional perspective, what we're anxious about with direct syntactic inspection is violations of the principle that equivalent inputs yield equivalent outputs. But Higher Inductive Types (see https://dl.acm.org/doi/pdf/10.1145/3290314) give a way of checking that kind of principle. 12/🧵
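Ordinary quotients already enforce a first-order version of this principle. A sketch in Lean (Parity and isEven are my own toy names): Quot.lift refuses a function out of a quotient unless you prove that equivalent inputs give equal outputs; HITs generalize the same discipline to richer equalities.

```lean
-- Naturals up to parity, as a quotient type.
def Parity := Quot (fun a b : Nat => a % 2 = b % 2)

-- Accepted: we can discharge the proof that the function respects
-- the relation, so it descends to the quotient.
def isEven : Parity → Bool :=
  Quot.lift (fun n => n % 2 == 0)
    (fun a b h => by show (a % 2 == 0) = (b % 2 == 0); rw [h])

-- Rejected: lifting `id` would require `a % 2 = b % 2 → a = b`,
-- which is false, so no representative-leaking function typechecks.
```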
Unfortunately Higher Inductive Types are more like data types than function types; maybe we need "Higher Coinductive Types" for behavioral equivalence. But I look forward to when higher-order functions can use syntactic metaprogramming iff it respects semantic equivalences. 13/🧵
In conclusion, I'm long type theory in the end: I think higher-categorical semantics has a lot to offer, is unlimited in what it can theoretically accommodate, & a lot of good theory is ready today. But some of what Lisp can do is still open research for extensionalists. 14/14
You can follow @davidad.