Readings shared February 24, 2026. https://jaalonso.github.io/vestigium/posts/2026/02/25-readings_shared_02-24-26 #AI4Math #ATP #Agda #CoqProver #FunctionalProgramming #Haskell #ITP #IsabelleHOL #LLMs #LeanProver #Math #Reasoning #Vampire
When Agda met Vampire. ~ Artjoms Šinkarovs, Michael Rawson. https://arxiv.org/abs/2602.18844v1 #Agda #ITP #Vampire #ATP
Formalized run-time analysis of active learning - coalgebraically in Agda. ~ Thorsten Wißmann. https://arxiv.org/abs/2602.16427 #Agda #ITP
Classifying 2-groups in Homotopy Type Theory. ~ Perry Hart, Owen Milner. https://phart3.github.io/2-groups-preprint.pdf #Agda #ITP
It's getting harder and harder to study maths.
I can't just lie on the bed and read a book or a paper any more 😂.
Have to check the proofs and try them in #agda myself.
OH "you might find it fun to pick up #Agda. similar enough to Haskell to get started; different enough to be interesting plfa.github.io "
#plt
Programming Language Foundatio...
Readings shared February 14, 2026. https://jaalonso.github.io/vestigium/posts/2026/02/15-readings_shared_02-14-26 #AI4Math #Agda #Clojure #CoqProver #FunctionalProgramming #Haskell #ITP #LeanProver #Lisp #Logic #Math #Minlog #Prolog #RustLang
Generalized decidability via Brouwer trees. ~ Tom de Jong, Nicolai Kraus, Aref Mohammadzadeh, Fredrik Nordvall Forsberg. https://arxiv.org/abs/2602.10844v1 #Agda #ITP
Commutative algebras of series. ~ Lorenzo Clemente. https://arxiv.org/abs/2601.19809v1 #ITP #Agda #Math
The Leibniz adjunction in homotopy type theory, with an application to simplicial type theory. ~ Tom de Jong, Nicolai Kraus, Axel Ljungström. https://arxiv.org/abs/2601.21843v1 #ITP #Agda
I'm pleased that my paper with @MartinEscardo on (counter)examples of injective types is out on arXiv: https://arxiv.org/abs/2601.12536.
This paper took a while to come together, partly because we refined the exposition a few times, partly because we kept coming up with new (counter)examples, and partly because we formalized (nearly) everything in TypeTopology using Agda. But that's okay, because I'm confident that all of these things led to a better paper.
[Abstract in the next post]
I realise there is a lot of momentum behind Lean for maths proofs.
But I'm also aware of a very established community around Agda.
Can someone explain in plain English - and for a complete beginner like me - what the main benefits of Agda are over Lean?
My own small experience with Lean is not advanced at all: I liked what I tried, but what I didn't like was the arbitrariness of finding tactics for your proof step needs. Does Agda do this better?
To elaborate on that - I found that my ability to write proofs depended very much on my ability to know/find tactics - and they don't seem to be organised in a way that makes finding "what you don't know" easier. And the tactic search engines are still not that easy to get working well (or at all, in my case).
Hey, any #idris or #agda people also see that "simple trick" for carrying around the index equalities necessary for a heterogeneous equality?
EDIT (found it!): https://martinescardo.github.io/dependent-equality-lecture/DependentEquality.html
Like, the equality proof I'm building is something like [not real syntax] (x : RVect n Mine) = (y : RVect (n + 0) Mine).
Trying to pattern-match on the recursive proof directly is a K-axiom violation (IIRC), and in any case it's making the type checker unhappy (for reasons I am _sure_ are good). Trying to apply cong / depCong to the recursive proofs is a type mismatch.
I think it was "as simple" as carrying around the equality proof on the indices, and I think the presentation I saw also used mixfix to make the syntax look quite nice.
I might be able to recapitulate it myself, but it would be easier and better to just crib from my betters.
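For what it's worth, the core of the trick in the linked lecture can be sketched roughly like this: define a dependent equality that relates elements of two fibres `B x` and `B y` *over* a proof that the indices `x` and `y` are equal, with mixfix syntax `_≡[_]_`. A minimal, unverified sketch (the names follow that presentation; everything else here is assumed):

```agda
module DependentEqualitySketch where

data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

-- Dependent equality: b ≡[ p ] b' relates b : B x and b' : B y
-- over a proof p : x ≡ y that their indices agree.
data _≡[_]_ {A : Set} {B : A → Set} {x : A} (b : B x)
            : {y : A} → x ≡ y → B y → Set where
  refl : b ≡[ refl ] b
```

With something like this, a proof about `RVect n Mine` and `RVect (n + 0) Mine` can carry the index proof `n ≡ n + 0` explicitly instead of fighting the type checker over a heterogeneous equality.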
To be more specific:
- It's weird to me that when we want to do a proof by induction, we never say so explicitly
- I really don't get what these chained equality things are doing. It's not clear to me what's supposed to be a rewrite rule and what's supposed to just be the same thing repeated
- I don't understand how rewrite rules apply, in that I don't know how to target just one side of an equation, or know what "direction" they're going to run in
I'm wondering if I should work through Certainty by Construction before continuing with PLFA, in the hopes that it will give me a better mental model for how Agda actually works?
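For reference, a standard PLFA-style lemma shows all three points at once: "induction" is just a recursive definition (the recursive call is the inductive hypothesis), the chained steps are ordinary equalities glued together by transitivity (with `≡⟨⟩` marking steps that hold by definition), and `cong` is how you target just one subterm of an equation. A sketch using the Agda standard library, not taken from the thread:

```agda
module PlfaSketch where

open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality
  using (_≡_; refl; cong; module ≡-Reasoning)
open ≡-Reasoning

-- "Induction on n" = pattern-match on n and recurse;
-- the recursive call +-identityʳ n is the inductive hypothesis.
+-identityʳ : ∀ (n : ℕ) → n + zero ≡ n
+-identityʳ zero = refl
+-identityʳ (suc n) =
  begin
    suc n + zero
  ≡⟨⟩                             -- holds by the definition of _+_,
    suc (n + zero)                -- so no rewrite is needed here
  ≡⟨ cong suc (+-identityʳ n) ⟩   -- cong targets the subterm under suc;
    suc n                         -- the step runs left-to-right as written
  ∎
```

The direction of each `≡⟨ e ⟩` step is fixed by how you write the chain: `e` must prove the line above equal to the line below, and `sym` flips a proof when you need the other direction.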
I'm a few chapters into "Programming Language Foundations in Agda" and... I'm still struggling to figure out how to write proofs in it, more than I did when learning Rocq or Isabelle or Lean. Granted, I have been very tired for months, but I'm wondering if it's more than that.
Is Agda somehow trickier to write proofs in? Does PLFA introduce techniques in a way that doesn't match what people do in practice? Do I just have a skill issue here?
Readings shared December 29, 2025. https://jaalonso.github.io/vestigium/posts/2025/12/30-readings_shared_12-29-25 #AI #Agda #CoqProver #FunctionalProgramming #Haskell #ITP #IsabelleHOL #LeanProver #Logic #Math #OCaml #Rocq #SMT
NAMOR: a new Agda library for modal extended sequents. ~ Riccardo Borsetto, Margherita Zorzi. https://ceur-ws.org/Vol-4142/paper4.pdf #ITP #Agda
Algebraizing higher-order effects. ~ Martin Andrieux, Alan Schmitt. https://hal.science/hal-05428166/document#page=35 #ITP #Agda