If we are in a simulation, who is to say that this simulation accurately reflects the real world that's running it? If you were an NPC in Minecraft and your perceptions and life experience were entirely contained within the game, you would think that was perfect reality.
The way I see it, the natural result of a simulation theory is a cascading sequence of progressively less real realities, where the residents of each think that theirs is the real one and eventually create simpler, abstracted realities for their games... which eventually hit a sufficient level of complexity to generate their own even simpler, even more abstracted realities. None of those child realities are perfect (despite the perspectives of their residents), though, so you end up with a spectrum where we can predict that the simplest reality is one that cannot yet run child realities of its own, but we have no meaningful way to speculate about what the other end of that spectrum might look like.
The answer to the question they imply is pretty clear on this one: the machine is flawed, but responsibility ultimately rests with the human operator whose entire job was to weed out that sort of flawed content. It doesn't matter if there was one article or a handful; any way you slice it, the human failed to manage the AI properly.
More interesting than "who's at fault" is the question of "why did the fault happen?" I suspect it's one of two situations. Either the human operator got lazy and stopped doing their job, possibly because the AI was so good that they grew complacent... or the human operator was completely overwhelmed by an incredible volume of AI-generated BS, and this stuff slipped through the cracks while they were busy eliminating the truly bizarre stuff. Either way, it says something interesting about the state of these AIs!