8/19/2025:
SAM ALTMAN: There’s a lot of things we could do that would grow faster, that would get more time in ChatGPT that we don’t do because we know that our long term incentive is to stay as aligned with our users as possible. But there’s a lot of short term stuff we could do that would really juice growth or revenue or whatever and be very misaligned with that long term goal. I’m proud of the company and how little we get distracted by that, but sometimes we do get tempted.
CLEO ABRAM: Are there specific examples that come to mind? Any decisions that you’ve made?
SAM ALTMAN: Well, we haven’t put a sexbot avatar in ChatGPT yet.
If we are in a simulation, who is to say that this simulation accurately reflects the real world that's running it? If you were an NPC in Minecraft and your perceptions and life experience were entirely contained within the game, you would think that that was perfect reality.
The way I see it, the natural result of simulation theory is a cascading sequence of less real realities, where the residents of each think that theirs is real and eventually create simpler, abstracted realities for their games... which eventually hit a sufficient level of complexity to generate their own even simpler, even more abstracted realities. None of those child realities are perfect, though, despite the perspectives of their residents. So you end up with a sequence where we can predict that the simplest reality is one that cannot yet run child realities, but we have no meaningful way to speculate about what the other end of that spectrum might look like.
The answer to the question they imply is pretty clear on this one: the machine is flawed, but responsibility ultimately rests with the human operator, whose entire job was to weed out that sort of flawed content. It doesn't matter if there was one bad article or a handful; any way you look at it, the human failed to manage the AI properly.
More interesting than "who's at fault" is the question of "why did the fault happen?" I suspect it's one of two situations. Either the human operator got lazy and stopped doing their job, possibly because the AI was so good that they grew complacent... or the human operator was completely overwhelmed by an incredible volume of AI-generated BS, and this stuff slipped through the cracks while they were busy eliminating the truly bizarre stuff. Either way, it says something interesting about the state of these AIs!
Programmers used to batch environments may find it hard to live without giant listings; we would find it hard to use them. -- D.M. Ritchie