I certainly appreciate the theoretical beauty of lambda calculus, but I do wonder what the point is from a modern implementation perspective.
Most hobby Lisps I run into seem to be more of a lambda calculus rite of passage than aiming for practical use.
I've implemented several Lisp interpreters myself in different languages, very much focused on practical use, and they don't look anything like this.
https://github.com/codr7/eli
An argument from Justine Tunney's LC implementation [1]:
> Programs written in the binary lambda calculus are outrageously small. For example, its metacircular evaluator is 232 bits. [...] Something like this could have practical applications in compression formats, which need a small busy beaver that can produce large amounts of data. It's also just downright cool.
> For example, if you built a compression tool you could have it encode a file as a lambda expression that generates it. Since it's difficult to introduce a new compression format that most people haven't installed, you could prefix the compressed file with this 383 byte interpreter to get autonomous self-extracting archives that anyone can use.
I partway expect @tromp to show up in this thread, but see his excellent site[2] for discussion of the algorithmic information theoretical applications of LC, exploration of Kolmogorov complexity, etc.
[edit: spelling] Every time I read anything by jart, I am reminded that there is a level of intellect so far above most that it doesn’t even make sense.
“I shall write a brainfuck interpreter in Lambda Calculus for fun.” Sure, sure, as one does.
jart is on a whole other level. I love seeing the stuff she puts out and the things she writes. So, so fun to be a jaw-dropped onlooker. Not to mention the tools are genuinely useful and incredibly cool. Llamafiles, alone, should win some kind of AI accessibility award.
Very interesting, thank you!
Take a look at this:
https://flownet.com/ron/lambda-calculus.html
or the video version:
https://www.youtube.com/watch?v=8qC1iZN5ozw
It shows how you can compute factorials in pure lambda calculus, and actually get it to run on a real computer in non-geologic time. The lessons learned from the program transformations along the way can be applied to real programs.
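The core trick the article builds up to can be sketched in a few Python lambdas (this is my own illustration, not the linked article's code, and it cheats by using native integers where pure lambda calculus would Church-encode them): recursion without named functions, via the Z combinator, the strict-evaluation form of the Y fixed-point combinator.

```python
# Z combinator: a fixed-point combinator that works under strict
# (call-by-value) evaluation, unlike the plain Y combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined with no self-reference: the Z combinator supplies
# the recursive call as the `rec` argument.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # → 120
```

The `lambda v: x(x)(v)` eta-expansion is what keeps Python's eager evaluation from looping forever; the lazy Y combinator would diverge here.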
There is one very practical use of the lambda calculus (and related calculi), which is to check whether your language implements the logic of functions correctly. For example, does your language support a polymorphic identity function, and if so, what happens when you apply a polymorphic identity function to a polymorphic identity function? What _should_ happen is easy to check in the lambda calculus, but it’s easy to not think this all the way through in an interpreter or compiler implementation. Rewriting is a pretty slow evaluation procedure, so we often use a machine’s native call stack instead, but doing so can get us into trouble.
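As a quick sketch of the self-application test (my own illustration in Python, whose untyped lambdas behave like the untyped lambda calculus here): `(λx.x)(λx.x)` β-reduces to `λx.x`, so applying the identity to itself must hand you the identity back, still usable at any "type".

```python
ident = lambda x: x       # λx.x

result = ident(ident)     # (λx.x)(λx.x) β-reduces to λx.x

assert result is ident    # self-application yields the identity itself
assert result(42) == 42   # and it still works on any argument
assert result("hi") == "hi"
```

A language with a restrictive monomorphic type system may reject this program outright; checking it against the calculus tells you whether that rejection is a design choice or a bug.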
Functions in plenty of languages do not work as the lambda calculus suggests they should, and those languages are mostly fine, but when user code gets dicey, they can operate in surprising and counterintuitive ways. What I like about the lambda calculus is that it captures, in an unbelievably small package, many of the things that we intuitively want a language to do.
Downvotes?
HN never ceases to surprise me!
I'm sorry that I didn't get the chance to reply to your original comment before the downvote brigade chilled your speech. If I recall correctly, your question was an interesting jumping off point for a discussion about the difference between information theoretic "simplicity" and code that is readable and simple to reason about.
Lisp seems to have a sort of almost mystical promise that maybe in some fundamental, mathematical point in conceptspace, the two forms of "simplicity" converge. I think that Lambda calculus, like quantum mechanics, gives the lie to this promise.
The Church numeral for 9 is not a very readable symbol for human beings, but the Church numerals as expressed in binary are more compact than an ASCII or UTF-8 representation of those numerals, and once they've been defined along with a few operations, you get big swaths of number theory stuff "for free".
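To make the "for free" claim concrete, here is a hedged sketch of Church numerals in Python lambdas (my own illustration, not anyone's comment above): the numeral n is the function that applies f to x a total of n times, and addition and multiplication fall out of function composition.

```python
# Church numerals: n = λf.λx. f(f(...f(x))), with n applications of f.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))   # compose f-iterators

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
nine = mul(three)(three)

print(to_int(nine))  # → 9
```

Notice that `mul` is just composition: iterating "apply f three times" three times applies f nine times, which is the number-theory-for-free effect described above.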
Disclaimer: just a guy on HN, no credentials
Edit: Also, I starred your eli repository, it looks really interesting!
Hope restored, much appreciated :)
The Byte magazine referenced in the article:
Byte Magazine Volume 04 Number 08 - LISP
https://archive.org/details/byte-magazine-1979-08
I remember memorising lambda calculus reduction techniques to pass my CS exams on paper, where lines of reductions two forearms wide had to be wrapped multiple times.
And now, just like a decade ago, my eyes glaze over when I read about this.
I can easily reason about functions that return functions that return functions but this just completely goes over my head.