I use AI every day. But despite repeatedly being told how wonderful the latest models are, and despite stories from people (usually with their own AI tools to sell) who claim they no longer write code, even moderately complex problems seem beyond the AI. Worse, the AIs keep claiming to understand problems that they clearly do not.
"These systems are powerful and continue to get more powerful."
We are not seeing this. I think LLM usefulness for coding has plateaued: feeding the models more data and adding more parameters are not translating into the real-world gains that the AI companies promote, and have to promote, because otherwise they will go bust.
More and more users are calling them out (some even claim older models were better), and local, free, open-source models are improving faster than Big AI's offerings.
AIs are useful. But the AI crash is coming.