Comment Re:AI growth. (Score 1) 112
I personally can't relate to it helping write quality code. Of the roughly five functions I've tried to use it for recently, it has gotten every single one wrong in some way, though admittedly in one case the wrong answer contained a clue about the existence and nature of an implementation step that was omitted from the standards documentation. Maybe it's more helpful in other domains of programming, but in mine it's been pretty useless.
Where I find current-generation AI helpful in writing code is not in writing it so much as modifying it. It's especially helpful when you decide to make some change that requires updating dozens of lines of code over several files. Sometimes such changes can be performed by a simple search and replace, but often you have to examine and edit each one individually. It's tremendously helpful to be able to tell the LLM to go find all the places a change is required and make it. You still have to look at each edit performed by the AI, but nearly all of them are usually right, and this takes a fraction of the time.
Another way AI is useful to me is due to my particular context: Android (the OS, not an app). Android builds are slow. Even incremental builds that don't touch any "Android.bp" file (a Makefile, basically) take 2-3 minutes, minimum, because that's how long it takes the build system to determine that only the one file you touched needs to be rebuilt. Anyway, this creates a situation where the typical edit/compile/test cycle takes several minutes, most of which is just waiting for the machine. If you touch an Android.bp, it's more like 8 minutes. If you context switch during that wait, to read email or edit a doc or whatever, the context switch overhead begins to kick your butt.
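For readers who haven't seen one, an Android.bp file is a declarative module description parsed by Soong, the Android build system; a minimal sketch might look like this (the module and file names here are hypothetical, just to show the shape of the format):

```
cc_binary {
    name: "hello_example",          // hypothetical module name
    srcs: ["hello_example.cpp"],    // hypothetical source file
}
```

Soong re-analyzes the build graph when any Android.bp changes, which is presumably why touching one is so much more expensive than touching a source file.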
But what I've found is that I can give the AI a task, like "Write unit tests for feature X", then start the build/test and context-switch away. The code the AI wrote won't compile or work, but that's fine (and also true of my own code, which rarely builds and executes perfectly the first time). When the build/test run is complete I don't mentally context-switch back into the coding task, I just copy-paste the output to the AI, which will make some changes (I don't bother looking at what), and I start the cycle again. After a half-dozen iterations of this (~20 minutes), the AI will have something that builds and passes, and then I actually switch back to see what it did and determine what needs to be improved. Usually I find some small tweaks that need to be made. Depending on their nature I either make them myself or tell the AI to do it.
This would be vastly better if the AI could run the build/test script itself and iterate to a working state without my input. I expect that will be possible soon -- and probably works for some environments now. But even as-is, the AI makes it so that while I don't actually produce the code any faster (maybe even a little slower), I'm more productive overall because I can take care of other things while the AI is working. The small interruptions to copy-paste output don't require a context switch.
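The close-the-loop workflow described above can be sketched as a generic retry loop. This is just an illustration of the control flow, not a real integration: the build runner and the AI "fixer" are stubbed out as plain callables, and no actual AI API is assumed.

```python
def iterate_until_green(run_build_and_test, ask_ai_to_fix, max_iters=6):
    """Repeatedly build/test; on failure, hand the log to the AI and retry.

    run_build_and_test: () -> (ok: bool, log: str)
    ask_ai_to_fix: (log: str) -> None   # edits the code somehow
    Returns the iteration count on success, or None if the budget runs out.
    """
    for attempt in range(1, max_iters + 1):
        ok, log = run_build_and_test()
        if ok:
            return attempt
        ask_ai_to_fix(log)  # e.g. paste the build output into the LLM
    return None

# Toy demonstration: each "fix" removes one failure.
state = {"failures": 3}

def fake_run():
    n = state["failures"]
    return n == 0, f"{n} test failure(s)"

def fake_fix(log):
    state["failures"] -= 1

print(iterate_until_green(fake_run, fake_fix))  # → 4 (three fixes, then a green run)
```

The human role in the manual version is simply playing `ask_ai_to_fix` by hand; the "vastly better" version is letting an agent own the whole loop.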
I find that current LLMs are roughly equivalent to a smart but extremely inexperienced entry-level programmer who just happens to have thoroughly read and absorbed the language manual and all of the available APIs. If you use them the way you would such a junior programmer, it works pretty well. You don't ask them to write the tricky code for you (or if you do, you expect their work to need significant improvement), and you don't expect them to have a good sense of what good design or architecture are. But they can still be extremely useful.
And, of course, they're still getting better. Fast.