The stories come from prior stories, with new prompts that essentially re-order the words. This is enshittification. It will grow until the LLMs can coin new terms, build analogies, research the principals of a story, and even call people close to the story for their opinions and summarize them. Then LLMs will have to associate good journalism practices with prompt guidelines given by trainer models.
Those missing parts are ultimately solvable by even more LLM APIs and trickery, but it's still not intelligent. In fact, the guardrails of most public LLMs are so narrow on divisive issues that most newsworthy stories would come out as dry-as-a-bone recaps. When the arc of time turns a previously non-controversial phrase into a dog whistle for a social agenda, an LLM would just agree with the accusation and move on. They have no agenda, including any to dodge embarrassment.
LLMs that could write in the acerbic, critical vein of the great social commentators (Twain, Vonnegut, Hitchens) are a long way off. Those would have to build a cohesive worldview from a mostly-sensible value system. As it stands, the Transformers don't really have a way of teasing out a contextually-generic moral system, because there isn't one. So we're creating the best possible savant in the field of reading everything and summarizing what it's read. That covers a lot of daily human thought, but it cannot cross over to feeling something, and it seems absurd when a machine tries to fake it.
He'll mix things up a bit, raising the power of the Executive to unprecedented levels, and then fade out in a blur of elderly non sequiturs. Pennies won't disappear, Greenland won't be a state, and the privatized portions of the federal government will flame out in corruption, prejudice, or bankruptcy. As it has been, so it will be again.
And your comment will look as ridiculous as any past administration's gloating.
So, in the end, even if a cloud service is somewhat more expensive, the corporate comparison is to bodies-at-keyboards and the overhead of herding the coders to Fix What We Want And Don't Touch Anything Else.
Corollary: This effect of boiling an employee's role, or even portions of it, down to "write a check to a service company" is the entire story of the Digital Revolution since the 1960s. LLMs are just another small (and flawed) step toward getting the ad-hoc requests automated. "Please compile the diverse numbers into a filtered spreadsheet and present us a meaningful graph tomorrow" is, I'd guess, something like 10-25% of what humans are doing at a business computer nearly constantly. That should be (one) Workplace Turing Test; a sketch of the task is below.
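For concreteness, here's a minimal sketch of that kind of request in Python. Everything in it is invented for illustration: the input file sales.csv, its column names, and the 10,000 cutoff are hypothetical, not anyone's real workflow.

    # A hypothetical ad-hoc request: filter raw numbers into a
    # spreadsheet and produce a simple graph. The file name, columns,
    # and threshold are all made up for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv")          # assumed columns: region, quarter, revenue
    filtered = df[df["revenue"] > 10_000]  # "filtered": keep only the rows that matter
    filtered.to_excel("filtered.xlsx", index=False)  # the requested spreadsheet

    # The "meaningful graph": total revenue per region across quarters.
    totals = filtered.groupby("region")["revenue"].sum()
    totals.plot(kind="bar", title="Revenue by region")
    plt.tight_layout()
    plt.savefig("report.png")

The task is mechanical once it's specified; that's exactly why it's a candidate for automation, and exactly why specifying it is the hard part.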
Chemistry professors never die, they just fail to react.