Avdi is a great developer who I have learned a lot from, but I think he is exactly wrong here. A thread. https://twitter.com/avdi/status/1114181834353647616
A syntax error is an error, and Avdi's argument says errors should be raised at runtime. Therefore, syntax errors should be raised at runtime. That conclusion follows directly from his argument.
The alternative is to acknowledge that some errors should be raised at runtime and some should be raised at compile-time. Now we're just haggling about which ones.
So you might ask: why is checking syntax at compile-time good? It's good because a file with a syntax error is not even a program, and there's no sense in trying to run it at all. This exact argument applies to type errors.
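A minimal Rust sketch of what I mean (just an illustration, the names are made up): neither kind of mistake ever reaches a user, because neither produces a program the compiler is willing to run.

```rust
fn add_one(n: i32) -> i32 {
    n + 1
}

fn main() {
    // A syntax error: uncomment this and the file is not even a program.
    // fn broken( {

    // A type error: syntactically fine, but still not a valid program.
    // Uncomment it and the compiler stops with "expected `i32`, found `&str`".
    // let _x = add_one("oops");

    println!("{}", add_one(41)); // prints 42
}
```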
In fact, it applies to *all* compile-time errors that are faithful predictors of invalid (or incorrect) programs. The sooner you can detect an invalid or incorrect program, the better, period.
There's no value in shipping a syntax error to your users. And if you can catch them earlier, there's no point in shipping a type error, a stack overflow, a null pointer error, or ANY other invalid or incorrect program either.
Does this mean that there's no point in runtime errors at all? Of course not. Runtime errors are for things that you can't detect at compile-time!
The whole point is to design better languages and better compilers so that the set of errors you can't detect at compile-time (i.e., the runtime errors) doesn't contain silly things like stack overflows, and instead contains more interesting things like bad business logic.
Yes, you will always have runtime errors. Yes, you will always have runtime failures. But they should be interesting! They shouldn't be because YOU MADE A TYPO AGAIN! We have computers that can check that for us!
I am 100% sure that Resilience Engineering does not say that compile-time errors are bad. https://twitter.com/avdi/status/1114243692590698496
Embracing compile-time errors doesn't mean you can't get good at runtime errors! You'll still get runtime errors! Just better ones.
There is, theoretically, a set of correct programs. We don't know what's in it, and don't even have an oracle that can tell us IF a program is in it. The closest we can get is an oracle that can tell us APPROXIMATELY whether a given program is correct. This is called a compiler.
The thing we should continue to strive to do is to make this oracle better and better at approximating this decision correctly.
Ok, I'll try one last argument. If you don't think catching errors before runtime is valuable, why are you writing tests?
My belief, if you care, is that compile-time vs runtime is exactly like types vs tests: types are for properties that the computer can check for us; tests are for properties that we check for ourselves. For as long as humans are needed to write programs, both will be important!
Coming back to this thread, I think the thing that makes me the saddest is this idea that compilers exist to slap your wrist when you make a mistake, when in reality they allow you to have a conversation with your computer in exactly the same way that test-driven development does.
When you write a test, you are saying "here is a property I expect this program to have". Then you write the program, and you see if it has that property. This is *exactly* what you do when you write a type. Types are literally statements that a program behaves a certain way.
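Here's a small Rust sketch of that parallel (illustrative only; run the test half with `cargo test`). It's the same kind of property, stated once as a type and once as a test.

```rust
// A type is a property the compiler checks at every call site, forever:
// `first` works for any element type, and it can only hand back a
// reference into the slice (or None); it cannot invent a T from nowhere.
fn first<T>(items: &[T]) -> Option<&T> {
    items.first()
}

// A test is a property we check ourselves, on examples we chose:
#[cfg(test)]
mod tests {
    use super::first;

    #[test]
    fn first_of_a_non_empty_slice_is_its_first_element() {
        let xs: &[i32] = &[3, 1, 2];
        assert_eq!(first(xs), Some(&3));
    }
}
```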
I honestly think that Java is the main reason programmers today conflate type systems with pain and unwanted restrictiveness. We tried to run before we could walk. We should have waited until we had modern, ergonomic type systems and humane error messages.
Fast-forward to today and I think that parametricity is right up there with inference and higher-kinded types as must-have features of a modern type system. We just weren't ready.
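If you haven't run into the word before, here's a tiny Rust illustration of parametricity (my own toy example): the generic signature alone rules out whole classes of wrong implementations.

```rust
// Because this function knows nothing about T, it cannot inspect,
// modify, or fabricate elements. All it can do is reorder or drop the
// ones it was given. The signature itself is a guarantee.
fn rearrange<T>(mut items: Vec<T>) -> Vec<T> {
    items.reverse();
    items
}

fn main() {
    // Inference fills in T with no annotations at all.
    assert_eq!(rearrange(vec![1, 2, 3]), vec![3, 2, 1]);
    assert_eq!(rearrange(vec!["a", "b"]), vec!["b", "a"]);
}
```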
(Notice that most of the complaints about generics were really about poor developer ergonomics: bad error messages and the like. That is not a coincidence.)
I ended up continuing this thread in a response thread. https://twitter.com/ReinH/status/1115300835330625536
I'll also point out that under Avdi's argument, C is preferable to Rust because C defers more errors to runtime. This is, frankly, a regressive position. It would prefer a 50-year-old language to a modern language. Further, it argues that the principles of Rust are misguided.
I simply don't think it's tenable to argue that we shouldn't catch (say) use-after-free bugs or deadlocks at compile-time so that operators can learn from experiencing them in production.
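For the record, this is the kind of thing the borrow checker catches (a toy example of mine; the error text is roughly what rustc prints):

```rust
fn main() {
    let s = String::from("hello");
    let r = &s; // borrow `s`

    // Uncommenting the next line is a use-after-free in C terms.
    // rustc rejects it at compile time with error E0505:
    // "cannot move out of `s` because it is borrowed".
    // drop(s);

    println!("{}", r); // the borrow is still alive here
}
```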
Avdi is also making use of a number of straw-man arguments. I don't think anyone actually claims that "all errors should be caught at compile-time" because this is impossible. I don't think the point of compile-time errors is to "eliminate the possibility of breakage".
I think the reasonable position is that when errors can be caught earlier in the software lifecycle—and the cost of catching them is worth paying—then we should try to do that. Whatever errors remain, and there will always be some, should be caught later.
Maybe the thing I want to get across is that a compiler error is not a slap on the wrist; it is the compiler helpfully telling you "what you gave me is not a valid program, so I will do you a favor by not trying to run it". We should be thanking the compiler for being so helpful!
The future was here but it was NOT evenly distributed. ML (for MetaLanguage) was a niche academic language used to program proof systems for quite a while before it was used in industry. https://twitter.com/porglezomp/status/1115293514777149440
Haskell was roughly contemporaneous with Java, but it too was a niche academic language for some time.