"I thought so" meant I didn't expect to get anything substantive, which turned out to be true.
I've been doing my own research, both on my own and, more importantly, tasked by my employer, since the early LLMs. There are places where they can be somewhat useful, mostly for repetitive, boilerplate-type tasks. Then there are claims being made, like yours, where I'd really like to see what's actually being done, because they don't line up with my experience or the experience of my cohorts. See my other post here for my opinion.
I could sit here and tell you all the stupid crap I've gone through trying to get useful results from LLMs, but the end result is this: multiple times now, and at non-trivial expense, even when the LLMs have been trained against our own codebase, they still don't produce results that a junior dev can use effectively. Instead they create half-wrong crap that I have to fix. I could have written it myself, in freaking Notepad++, and it would have been more correct.
I don't have willful ignorance; I have disappointment and anger over over-hyped tech that fails to deliver. Will it get better? Sure, maybe. Or will this be the same stupid crap as before, where a manager creates something in the LLM (like the "code-free" tools of the past) that I end up either spending inordinate amounts of time fixing, or just redoing myself?
'nough said.