I've implemented functional induction theorems in Lean, shipping with the upcoming version 4.8.0, and wrote a tutorial-style blog post about it: https://lean-lang.org/blog/2024-5-17-functional-induction/
(h/t to David Christiansen for the tooling behind the hover features.) #lean #leanlang
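A minimal sketch of what functional induction looks like in practice (untested, written against Lean 4.8; `fib` and the commented-out signature are illustrative assumptions, not taken from the post or the blog):

```lean
-- A recursive function defined by pattern matching:
def fib : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

-- Lean can derive a functional induction principle whose cases mirror
-- the function's own equations:
#check @fib.induct
-- roughly: (motive : Nat → Prop) → motive 0 → motive 1 →
--   (∀ n, motive n → motive (n + 1) → motive (n + 2)) → ∀ n, motive n
```

The point is that a proof about `fib` can then follow the same case split and recursion pattern as the definition itself, via `induction n using fib.induct`, instead of reinventing that structure by hand.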
A weird thing about being 50 is that there are programming languages that I've used regularly for longer than some of the software developers I work with have been alive. I first wrote BASIC code in the 1980s. The first time I wrote an expression evaluator--a fairly standard programming puzzle or homework--was in 1990. I wrote it in Pascal for an undergraduate homework assignment.

I first wrote perl in the early 1990s, when it was still perl 4.036 (5.38.2 now). I first wrote java in 1995-ish, when it was still java 1.0 (1.21 now). I first wrote scala, which I still use for most things today, in 2013-ish, when it was still scala 2.8 (3.4.0 now).

At various times I've been "fluent" in 8086 assembly, BASIC, C, Pascal, perl, python, java, and scala; and passable in LISP/Scheme, Prolog, old-school Mathematica, (early-days) Objective C, matlab/octave, and R. I've written a few lines of Fortran and more than a few lines of COBOL that I ran in a production system once. I could probably write a bit of Haskell if pressed, but for some reason I really dislike its syntax, so I've never been enthusiastic about learning it well. I've experimented with Clean, Flix, Curry, Unison, Factor, and Joy and learned bits and pieces of each of those.

I'm trying to decide whether I should try learning Idris, Agda, and/or Lean. I'm pretty sure I'm forgetting a few languages. A bit of 6502 assembly long ago. A bit of Unix/Linux shell scripting (old enough to have lived and breathed tcsh before switching to bash; I mostly use fish now).
When I say passable: in graduate school I wrote a Prolog interpreter in java (including parsing source code or REPL input), within which I could run the classic examples like append or (very simple) symbolic differentiation/integration. As an undergraduate I wrote a Mathematica program to solve the word recognition problem for context-free formal languages. But I'd need some study time to be able to write these languages again.
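The word recognition problem for context-free languages mentioned above is classically solved with the CYK dynamic-programming algorithm. Here is a minimal Python sketch of that algorithm (not the original Mathematica program; the toy grammar for a^n b^n is a made-up example):

```python
def cyk(word, terminal_rules, binary_rules, start="S"):
    """Return True iff `word` is derivable from `start`.

    Expects a grammar in Chomsky normal form:
      terminal_rules: terminal -> set of nonterminals A with A -> terminal
      binary_rules:   list of (A, B, C) triples for productions A -> B C
    """
    n = len(word)
    if n == 0:
        return False  # plain CNF has no rule for the empty word
    # table[l][i] holds the nonterminals deriving word[i : i + l + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[0][i] = set(terminal_rules.get(ch, ()))
    for length in range(2, n + 1):            # substring length
        for i in range(n - length + 1):       # start position
            for split in range(1, length):    # length of the left part
                for head, left, right in binary_rules:
                    if (left in table[split - 1][i]
                            and right in table[length - split - 1][i + split]):
                        table[length - 1][i].add(head)
    return start in table[n - 1][0]

# Toy CNF grammar for { a^n b^n : n >= 1 }:
#   S -> A B | A T,  T -> S B,  A -> a,  B -> b
terminal_rules = {"a": {"A"}, "b": {"B"}}
binary_rules = [("S", "A", "B"), ("S", "A", "T"), ("T", "S", "B")]
```

With this grammar, `cyk("aabb", terminal_rules, binary_rules)` returns `True` and `cyk("abab", terminal_rules, binary_rules)` returns `False`, in O(n³ · |grammar|) time.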
I don't know what the hell prompted me to reminisce about programming languages. I hope it doesn't come off as a humblebrag, but rather like an old guy spinning yarns. I think I've been through so many because I'm never quite happy with any one of them, and because I've had a varied career that started when I was pretty young.
I guess I'm also half hoping to find people on here who have similar interests so I'm going to riddle this post with hashtags:
A couple of months ago, another mathematician contacted me and two of my co-authors (Green and Manners) regarding a minor mathematical misprint in one of our papers. Normally this is quite a routine occurrence, but it caused a surprising amount of existential panic on my part because I thought it involved the #PFR paper for which I had run a #Lean formalization project. As it turned out, though, the misprint involved a previous paper, in a portion that was not formalized in Lean. So all was well; we thanked the mathematician and updated the paper accordingly.
But yesterday, we received referee reports for the PFR paper that was formalized in Lean, and one of the referees did actually spot a genuine mathematical typo (specifically, the expression H[A]-H[B] appearing in (A.22) of https://arxiv.org/abs/2311.05762 should be H[A]+H[B]). So this again created a moment of panic - how could Lean have missed this?
After reviewing the GitHub history for the blueprint, I found that when I transcribed the proof from the paper to blueprint form, I had unwittingly corrected this typo (see Equation (9) of https://teorth.github.io/pfr/blueprint/sect0003.html in the proof of Lemma 3.23) without noticing that the typo was present in the original source paper. This lemma was then formalized by other contributors without difficulty. I don't remember my actual thought process during the transcription, but I imagine it is similar to how, when reading in one's native language, one can often autocorrect spelling and grammar errors in the text without even noticing that one is doing so. Still, the experience gives me just a little pause regarding how confident one can be in the 100% correctness of a paper that was formalized...
There might be train driver strikes in Germany again soon. I guess if I want to give my #lean tutorial at #bobkonf2024 in person, I have to start cycling soon…
Looking for good articles/resources on facilitating a discussion on balancing a shared feeling of progress with a sense of urgency. #softwaredev #lean #agile #scrum
@andreclaassen the #fediverse is so full of different things, #firefish #gotosocial #castopod, so much to try out, built by motivated people, that I personally feel no need at all to look at what the planet's billionaires are having programmed.
We may not be many anymore, but perhaps we're enough ;)
A new #Lean formalization project led by Alex Kontorovich and myself has just been announced to formalize the proof of the prime number theorem, as well as much of the attendant supporting machinery in complex analysis and analytic number theory, with the plan to then go onward and establish further results such as the Chebotarev density theorem. The repository for the project (including the blueprint) is at https://github.com/AlexKontorovich/PrimeNumberTheoremAnd , and discussion will take place at this Zulip stream: https://leanprover.zulipchat.com/#narrow/stream/423402-PrimeNumberTheorem.2B
You know how to figure out how much fits in a sprint? Work as fast as you can, then after two weeks see how much you got done. What would you have done better if you had estimated the individual stories?