@screwlisp @kentpitman regarding the discussion we had after the #LispyGopherClimate show ended: MiniKanren is a logic programming language embedded in Scheme (sort of like a Prolog implemented in Scheme and written as S-expressions), and you can use machine learning methods like neural networks to guide the search tree of its goal solver. This paper is an example of what I was talking about.
Even before LLMs were invented, MiniKanren could do program synthesis using purely symbolic logic. Its developers built a prototype called Barliman where you provide example input->output pairs as constraints, and a constraint solver generalizes those examples into a function that produces the correct output for any input. As a simple example, you could give it the following input-output pairs:
- () () -> ()
- (a) () -> (a)
- () (a) -> (a)
- (a) (a) -> (a a)
…and the constraint solver could figure out that you are trying to implement the append function for lists and write the code automatically (a sketch of the kind of definition it synthesizes is below), without LLMs, using purely symbolic logic.
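For reference, here is roughly the kind of definition such a synthesizer arrives at. This is a minimal sketch assuming the standard miniKanren operators (conde, fresh, ==, run*) from The Reasoned Schemer, not Barliman's literal output:

```scheme
;; The classic appendo relation: holds when (append l s) equals out.
(define (appendo l s out)
  (conde
    ;; appending onto the empty list leaves s unchanged
    ((== l '()) (== s out))
    ;; otherwise peel the head off l and recur on the tail
    ((fresh (a d res)
       (== l (cons a d))
       (== out (cons a res))
       (appendo d s res)))))

;; Because it is a relation rather than a function, the same
;; definition also runs "backwards", enumerating all splits:
;; (run* (x y) (appendo x y '(a b)))
;;   => ((() (a b)) ((a) (b)) ((a b) ()))
```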
As you might expect, the solver could be very slow, or even diverge (never return an answer). The paper I mentioned above uses neural networks to guide the constraint solver's search, improving both the performance and the usefulness of the results it returns.
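To make the guidance idea concrete, here is a toy, self-contained sketch (not the paper's actual method): instead of expanding search branches in a fixed order, keep the frontier sorted by a learned score and always expand the most promising candidate first. The score procedure below is a hypothetical stand-in for a trained neural network:

```scheme
;; Hypothetical stand-in for a neural scorer: this stub just
;; prefers shorter candidate expressions.
(define (score candidate)
  (- (length candidate)))

;; Insert a candidate into a frontier kept sorted by descending score.
(define (insert-by-score x xs)
  (cond ((null? xs) (list x))
        ((>= (score x) (score (car xs))) (cons x xs))
        (else (cons (car xs) (insert-by-score x (cdr xs))))))

;; Merge newly expanded candidates into the remaining frontier.
(define (merge-all new rest)
  (if (null? new)
      rest
      (merge-all (cdr new) (insert-by-score (car new) rest))))

;; Best-first search: always expand the highest-scoring candidate.
(define (best-first frontier expand done?)
  (cond ((null? frontier) #f)
        ((done? (car frontier)) (car frontier))
        (else (best-first (merge-all (expand (car frontier))
                                     (cdr frontier))
                          expand done?))))

;; Toy demo: grow lists of a's until we reach the target (a a a).
;; (best-first '(())
;;             (lambda (c) (list (cons 'a c)))
;;             (lambda (c) (equal? c '(a a a))))
;;   => (a a a)
```

As I understand it, the paper's version scores miniKanren's candidate partial programs with a trained neural network instead of a hand-written heuristic like this one.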
Now imagine applying this technique to domains beyond code generation and optimization, for example auto-completion or cache pre-fetching, and building it into a programmable computing environment like Emacs. You could have a tool like "Cursor," except that instead of LLMs it would use classical computing and constraint solvers, consuming a fraction of the energy that LLMs do.
#tech #software #AI #LLM #MachineLearning #NeuralNetwork #ConstraintLogic #ConstraintSolver #LogicProgramming #Prolog #MiniKanren #Emacs #Lisp #Scheme #SchemeLang #ProgramSynthesis

