The stories come from prior stories, with new prompts that essentially re-order the words. This is enshittification. It will grow until LLMs can coin new terms, build analogies, research the principals of a story, and even call people close to the story for their opinions and summarize them. Then LLMs will have to associate good journalism practices with prompt guidelines given by trainer models.
Those missing parts are ultimately solvable with even more LLM APIs and trickery, but it's still not intelligence. In fact, the guardrails of most public LLMs are so narrow around divisive issues that most newsworthy stories would come out as dry-as-a-bone recaps. And when the arc of time turns a previously non-controversial phrase into a dogwhistle for some social agenda, LLMs would just agree with the accusation and move on. They have no agenda, not even one to dodge embarrassment.
LLMs that could write in an acerbic, critical style like the great writers of social commentary (Twain, Vonnegut, Hitchens) are a long way off. Those would need to build a cohesive worldview from a mostly-sensible value system. As it stands, Transformers don't really have a way of teasing out a contextually-generic moral system, because there isn't one. So we're creating the best savant possible in the field of reading everything and summarizing what it's read. That covers a lot of daily human thought, but it can't cross over into feeling something, and it seems absurd when a machine tries to fake it.