> so why doesn't AI just create the pull request
It does, but that's the problem.
Have a listen to this week's Security Now; they do a segment on exactly this, with a Microsoft engineer 'arguing' with a chatbot over a .NET issue.
The chatbot identifies the filed problem as improper memory allocation in a regex parser (catastrophic backtracking), but instead of fixing the parser it patches it to fail silently, so the memory violation just never triggers.
The engineer suggests the underlying bug be fixed instead; the chatbot agrees, but then writes unit tests that fail, forgets to enable the tests, and so on.
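For anyone who hasn't hit this class of bug, it's catastrophic regex backtracking. Here's a minimal sketch of the two "fixes" being contrasted; the pattern, the input cap, and the function names are invented for illustration, not the actual .NET code from the episode:

```python
import re

# A nested quantifier like (a+)+ makes a backtracking matcher explore
# exponentially many paths on a non-matching input such as "aaa...ab".
BAD = re.compile(r"(a+)+$")

# The real fix: an equivalent pattern with no nested quantifiers,
# which a backtracking engine handles in linear time.
GOOD = re.compile(r"a+$")

def match_silent(text: str, limit: int = 64):
    """Chatbot-style patch: dodge the blow-up instead of removing it."""
    if len(text) > limit:
        return None            # silently reports "no match"; the bug survives
    return BAD.match(text)

def match_fixed(text: str):
    """Engineer's fix: rewrite the pattern so backtracking can't explode."""
    return GOOD.match(text)

print(match_fixed("aaaa") is not None)   # → True
print(match_fixed("aaab") is None)       # → True
# match_silent("a" * 100 + "b") returns None quickly, but only because it
# never ran the regex at all, which is exactly the kind of non-fix above.
```

The point: the silent version makes the symptom (runaway memory/time) disappear while changing observable behavior, so the ticket looks closed even though the parser is still broken.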
Steve and Leo accurately describe its level of competence as that of a new intern.
Except that under Nadella's new AI vision, the Microsoft employee's job has shifted from being a developer to suggesting the AI do a better job: tending to Copilot pull requests instead of fixing problems as an engineer would.
But the problem doesn't throw a memory error anymore, so close the PR and move on. "Automated enshittification" was how they described where that trajectory ends.