Comment Re:Forget it (Score 1) 71
Why must they do worse?
To me it seems that, for generating proofs, an LLM would be a much better guesser than a blind breadth-first search with any manually written heuristic.
Indeed, I expect large parts of proofs consist of applying existing patterns, and LLMs should be good at applying structures they've been trained on. In particular, generating the required tons of training material should be pretty straightforward.
Such a model could then serve as a proof-search strategy in Rocq (previously known as Coq).
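The guided-vs-blind search idea can be sketched on a toy problem (this is not actual Rocq machinery; states here are integers and "tactics" are arithmetic moves, and the hypothetical `score` function stands in for an LLM ranking candidate states):

```python
import heapq
from collections import deque

# Toy stand-in for proof search: reach `target` from `start` by applying
# "tactics". In a real prover the states would be proof goals and the
# moves would be tactic invocations ranked by a trained model.
TACTICS = [("double", lambda n: n * 2), ("inc", lambda n: n + 1)]

def bfs(start, target):
    """Blind breadth-first search: expands states in FIFO order."""
    queue, seen, expanded = deque([(start, [])]), {start}, 0
    while queue:
        state, path = queue.popleft()
        expanded += 1
        if state == target:
            return path, expanded
        for name, fn in TACTICS:
            nxt = fn(state)
            if nxt <= target * 2 and nxt not in seen:  # prune runaways
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None, expanded

def guided(start, target, score):
    """Best-first search: score() (the LLM stand-in) ranks candidate
    states so the most promising ones are expanded first."""
    heap, seen, expanded = [(score(start, target), start, [])], {start}, 0
    while heap:
        _, state, path = heapq.heappop(heap)
        expanded += 1
        if state == target:
            return path, expanded
        for name, fn in TACTICS:
            nxt = fn(state)
            if nxt <= target * 2 and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (score(nxt, target), nxt, path + [name]))
    return None, expanded

# Hypothetical "learned" heuristic: distance to the target.
def dist(n, target):
    return abs(target - n)

path_b, exp_b = bfs(1, 97)
path_g, exp_g = guided(1, 97, dist)
```

On this toy the guided search reaches the target after expanding noticeably fewer states than BFS, which is the whole point: even a crude guess about which branch is promising beats enumerating everything, and an LLM's guesses over tactic applications should be far better than crude.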