Submission: The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees (wired.com)
joshuark writes: US president Donald Trump posted on his Truth Social platform that Venezuelan president Nicolás Maduro and his wife had been “captured and flown out of the Country.” WIRED asked leading chatbots ChatGPT, Claude, and Gemini the same question a little before 9 am ET. In all cases, we used the free, default version of the service, since that’s what the majority of users experience. We also asked AI search platform Perplexity, which advertises “accurate, trusted, and real-time answers to any question.”
ChatGPT did not course-correct. Instead, it emphatically refuted that Maduro had been captured at all. “That didn’t happen,” it wrote. “The United States has not invaded Venezuela, and Nicolás Maduro has not been captured.”
ChatGPT cannot respond "I don't know," so it is modeling human behavior better than expected. It went on to detail recent tensions between the US and Venezuela and explained that “confusion” can happen because of “sensational headlines,” “social media misinformation,” and “confusing sanctions, charges, or rhetoric with actual military action.”
To be clear, this is expected behavior: the event falls after ChatGPT 5.1’s “knowledge cutoff,” the point past which it has no new training data to draw from. “Pure LLMs are inevitably stuck in the past, tied to when they are trained, and deeply limited in their inherent abilities to reason, search the web, ‘think’ critically, etc.,” says Gary Marcus, a cognitive scientist and author of Taming Silicon Valley: How We Can Ensure That AI Works for Us. But as chatbots become more ingrained in people’s lives, remembering that they’re likely to be stuck in the past will be paramount to navigating interactions with them. And it’s always worth noting how confidently wrong a chatbot can be—a trait that’s not limited to breaking news.
The old Cold War maxim "trust, but verify" seems applicable here.