Censorship will be the end result.
What AI isn't trained on cannot be known.
It won't just be censorship. It will ultimately be companies paying to get content created that causes AIs to say whatever the companies trying to sell stuff want people to hear.
Viewers/people who consume media will crave answers to the questions they put to the AI, and the AI companies are going to want to make sure their AI gives plausible-sounding answers that keep people using THEIR services.
The AI companies are now in the enviable position of being de facto news companies without officially being news companies. That means they get all the benefits news companies have with none of the liabilities. For example, all the AIs come with disclaimers, so the AI company is (for now) not liable even if the results it generates are entirely false.
There's still the possibility of a change in the legal climate, and the AI companies could come crashing down at some point.
For example: suppose the Supreme Court rules that the CDA Section 230(c)(1) protections do not apply to AI-generated content, and that the operator of a large public AI that answers questions in a search-engine format has the same liability as a publisher for any text its AI outputs that is not identical to content submitted by a human user of the service - and that this liability can include damages caused to anyone relying on misleading statements which are not cited, and are therefore implicitly backed by the publisher or speaker.