Why Can't ChatGPT Tell Time? (theverge.com) 119

ChatGPT can browse the web, write code and analyze images, but ask it what time it is and you might get the correct answer, a confident wrong answer, or a polite refusal -- sometimes all three within minutes of each other.

The problem stems from how large language models work. These systems predict answers based on training data and don't receive constant real-time updates about things like time unless they specifically search the internet. AI robotics expert Yervant Kulbashian told The Verge that a language model "is only referencing things that have entered this space," comparing it to a castaway on an island stocked with books but no watch.

OpenAI can give ChatGPT access to system clocks, and does so through features like Search. But there are tradeoffs: every clock check consumes space in the model's context window, the finite portion of information it can hold at any given moment. Pasquale Minervini, a natural language processing researcher at the University of Edinburgh, said the leading models also struggle to read analog clock faces and have trouble with calendars.
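The tradeoff described above can be sketched minimally. Everything below is illustrative, not OpenAI's actual implementation: a clock check reaches the model only as more text appended to its finite context window.

```python
import datetime

# Hypothetical sketch (names are made up): how a "current time" tool result
# enters the model's context window as ordinary text.

def get_current_time() -> str:
    """Tool the host application runs on the model's behalf."""
    return datetime.datetime.now(datetime.timezone.utc).isoformat()

def build_context(history: list[str], user_message: str) -> list[str]:
    # The tool result is just more tokens in the context window; the model
    # keeps no persistent clock between requests.
    tool_result = f"[tool:get_current_time] {get_current_time()}"
    return history + [tool_result, f"[user] {user_message}"]
```

Each such check adds tokens to the context, which is exactly the cost the article describes.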

  • Prove you're a human (Score:5, Interesting)

    by ZiggyZiggyZig ( 5490070 ) on Friday November 28, 2025 @07:42AM (#65822791)

    Maybe in the future we can use this test to differentiate between replicants and real humans...

  • That sucks.
  • I have been using both Grok and ChatGPT relatively extensively lately and in versatile scenarios... all I can say is there is merit in using them but only to a very limited degree.

    If you can put to them a task they're good at by design, hey great for you... but I find most tasks we want to give them are not that.

    • You're saying your post was written by AI?

    • by caseih ( 160668 )

      I find the various LLMs are helpful as a form of search engine, enabling me to drill down to potentially useful information more quickly. However at the same time they are far worse than a search engine because they aren't able to actually give you the sources to check. When ChatGPT generates a chunk of code, if you ask it where it got it from, it will say it didn't get it from a specific site, it just knows this stuff. Which of course ends up wrong half the time. So you end up with wrong stuff confident

    • 90% of the difficulty in using AI tools is knowing when to use them.
  • In a constantly changing world where change is happening ever faster, ChatGPT is essentially a static model.

    • So, basically if you have a nail, a hammer is still better than a screwdriver to push it in the wall.

      That is to say, there are some things that LLMs don't do well because, basically, they are just that: large language models. Returning information about current, timely events is not in their nature.

    • Re:So, basically (Score:5, Insightful)

      by timeOday ( 582209 ) on Friday November 28, 2025 @10:03AM (#65823007)
      Not really. If you ask it what's going on in the news then you'll get an up-to-date response, because it knows in that instance to check and summarize the news rather than just generating something from its LLM.

      And if you ask Gemini what time it is you'll get the right answer, for the same reason.

      The fact that ChatGPT fails to do this is a problem with ChatGPT, not any inherent problem for AI. Probably in response to this embarrassing article it will be fixed within a couple weeks.

    • Any modern AI model can be provided "tools" that it can use to perform various tasks or retrieve various information. The current date and time is easy to do. I can't say why the author and/or ChatGPT seems to have trouble but you can easily set up a tool to return the current date and time, instruct the AI "this will return the current date and time" and then if the user asks for it the AI will automatically leverage the tool. It's possible ChatGPT just has a lot of tools at its disposal and is getting con
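    A minimal sketch of that setup, using the JSON-schema style of tool definition common to several chat-completion APIs (the shape and names here are simplified assumptions, not any specific vendor's API):

```python
import datetime

# Illustrative tool definition; names are made up for the example.
TIME_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_datetime",
        "description": "Return the current date and time in UTC (ISO 8601).",
        "parameters": {"type": "object", "properties": {}},
    },
}

def dispatch_tool_call(name: str, arguments: str) -> str:
    # The model only emits the tool name; host code runs it and feeds the
    # resulting string back into the context as the "answer".
    if name == "get_current_datetime":
        return datetime.datetime.now(datetime.timezone.utc).isoformat()
    raise ValueError(f"unknown tool: {name}")
```

    The model never reads a clock itself; it just learns that questions about the time should trigger this tool.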
  • by brunes69 ( 86786 ) <slashdot&keirstead,org> on Friday November 28, 2025 @07:55AM (#65822811)

    Humans are notoriously horrible at measuring the passage of time. Ever heard of the expression "a watched pot never boils" or "time flies when you're having fun"? Where do you think those expressions come from?

    If I took away your watch/phone and all external tools and asked you to be able to tell me what time it was a few hours from now, do you think you would succeed within even a 30 minute margin of error?

    • We aren't asking it to guess based on how tired it is. Just check its own system clock.
      • by brunes69 ( 86786 )

        An LLM doesn't have a "system clock" any more than your brain does. Do you know anything at all about how LLMs work?

        If an LLM made a decision to "check its system clock" for the time, then it is no different from calling out to a web service to do the same. It is RAG-enabled external tool use. That is all it is.

      • Yeah, I don't understand how these models can "learn" from millions of scraped websites, but can't scrape some atomic clock website once a day to at least be relatively accurate with time.

    • What do humans have to do with super human intelligences? Are you saying that if humans are bad at telling the time, then super human intelligences should be excused for being worse than humans at that task? Did you miss the bit about "super"? It's from Latin, it means "above", "over", "beyond", "better".
      • by brunes69 ( 86786 ) <slashdot&keirstead,org> on Friday November 28, 2025 @08:21AM (#65822855)

        I am saying that the question is irrelevant, just like a lot of pontificating about AGI is.

        An LLM needing to use an external tool to check the time is irrelevant to how intelligent it is. Humans also need to use tools to check the time.

        It's a stupid thing to even discuss.

        • It's far from stupid to discuss. It's what you pay OpenAI and Anthropic and Microsoft for. It's why they have the valuations. If you claim the LLMs are subhuman tools then the bubble bursts. And your little 401k goes with it. For someone defending LLMs you appear to have a very contradictory understanding of the industry.
      • by allo ( 1728082 )

        Intelligence, Knowledge and Knowledge of current events (as simple as the current time) are three distinct parts.

        A super intelligence may solve millennium problems but never have heard of the Eiffel tower. And with just its "brain" it cannot tell time; it needs to first get access to a clock. In the best case it could try to construct relative time by estimating, e.g., how many tokens/second it thinks it generates and looking back at how many it has already generated (also not as simple as counting the own

        • Presumably, a super intelligence has ways of identifying the time that transcend the merely intelligent idea you've put forth?
          • by allo ( 1728082 )

            > Presumably, a super intelligence has ways
            You talk about superintelligence like some kind of hard to understand supernatural concept.

            If you build a system that can (hopefully) learn to become superintelligent, you know exactly which inputs you provide and which outputs you process. There is no "presumably it has ways". Your computer can't develop a way to print if you don't plug in a printer. An LLM can't develop a method to read the time if you don't provide a time function. A robot can't learn to see

    • by wagnerer ( 53943 )

      It's a learned skill and can be precise to the minute over at least a day. Granted, it's gone by the wayside since there are so many exact references available now. There's no need.

      I'm confident I could do 30 min margin of error over a few hours and I know many other much better than me. If I have an appointment in the morning I'll consistently wake up exactly 1 minute before my alarm no matter what time I go to bed.

      • Please mod up. My wife regularly gets up to check the oven for a 30+ minute recipe when the timer will start sounding the moment she's walking into the kitchen.

        For 4 years (2008 until 2012) when I had no strict job hours, I didn't use an alarm clock and would wake up in the desired time window around 7h30am and would even sleep late on the weekend.

        My kids sometimes try to guess the time; without cheating they often get within 15 minutes throughout the day.

        It seems the parent poster is just speaki

    • If I took away your watch/phone and all external tools and asked you to be able to tell me what time it was a few hours from now, do you think you would succeed within even a 30 minute margin of error?

      My cat knows what time my bedtime is and she can't read clocks. It is not behavior (my) based as she interrupts me when I am deep in the middle of something interesting.

      For myself, I am usually within a minute or two of the exact time whenever anyone asks me. There are times when I have no idea at all what time it is, but that is rarer than picking the correct time to the minute.

      I am not trying to take away the Truth of what you are claiming. I am just trying to help you understand that the Truth of wha

  • "read analog clock faces and have trouble with calendars"? Automated image pattern matching has been around for decades, and in the limited arena of clock faces and calendars it seems very doable.
    • Re:Really? (Score:5, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday November 28, 2025 @08:45AM (#65822891) Homepage Journal

      automated image pattern matching has been around for decades

      The problem is that the LLM only does one trick. When you start integrating other software with it, the other software's input has to be fed in the same way as your other tokens. As the last paragraph of TFS says, "every clock check consumes space in the model's context window" and that's because it's just more data being fed in. But the model doesn't actually ever know what time it is, even for a second; the current time is just mixed into the stew and cooked with everything else. It doesn't have a concept of the current time because it doesn't have a concept of anything.

      You could have a traditional system interpreting the time, and checking the LLM's output to determine whether what it said made sense. But now that system has to be complicated enough to determine that, and since the LLM is capable of so much complexity of output it can never really be reliable either. You can check the LLM with another LLM, and that's better than not checking its output at all, but the output checking is subject to the same kinds of failures as the initial processing.

      So yeah, we can do that, but it won't eliminate the [class of] problem.

      • The problem is that the LLM only does one trick. When you start integrating other software with it, the other software's input has to be fed in the same way as your other tokens.

        Not really, you can have a pre-processor that text-matches (maybe with a regular expression) phrases like "what time is it?" or "wut tim" and responds with the appropriate time. Doing it that way actually saves processing power.

    • by PPH ( 736903 )

      ChatGPT is an idiot savant. It "knows" the contents of the entire web. But it can't figure out NTP?

  • by fluffernutter ( 1411889 ) on Friday November 28, 2025 @08:13AM (#65822849)
    Because AI cannot do anything it hasn't been programmed to do. When we enter the quit command into a regular program and it doesn't quit, that is called a bug; but when an AI does it, it is "trying to survive"? No, people have designed a program that works on data so complex they don't even understand it, and they made the poor design choice to put the part they don't understand in front of the internal quit. It's still a bug, but people want to make it something more mystical than that. The complexity of it is basically being used to create plausible deniability that it is still a computer program.
    • For the record, maybe we shouldn’t call it a bug because they have rudimentary sentience and thinking ability. Maybe unproductive mutation would be more apt.
    • Because AI cannot do anything it hasn't been programmed to do.

      This is comically incorrect.

      • So I know copilot can execute Python code. Are you saying it can also execute Cobol and pl/I and APL and lisp?
        • So I know copilot can execute Python code. Are you saying it can also execute Cobol and pl/I and APL and lisp?

          AI is not executing anything. It is merely passing code to an interpreter which executes the code outside of the model.

          Random examples of things AIs are able to do without having been programmed to do them:

          - Language translation
          - base64 decoding
          - Solve simple ciphers
          - Adding fractions
          - Writing code in a variety of languages

          • So what I said. It can do some things that an LLM is good for but it's not going to compile your code for you or read the time off the system or otherwise interact with the world at all unless it is specifically designed to.
  • by unami ( 1042872 ) on Friday November 28, 2025 @08:23AM (#65822861)
    Same if you ask it about the next full moon -- or really any temporal question about the present or future where there's a lot of training data from the past -- as is to be expected.
    • Yes it will keep being LLMs are LLMs week until the outside world moves on from hammering "LLMs are magic" week every week.

      LLMs are of course LLMs.

    • Same if you ask it about the next full moon - or really anything temporal subject about the present or future where there's a lot of training data from the past - as is to be expected.

      I just asked Chat GPT "When is the next full moon?" and it did a web search to give me the correct answer for my location.

  • Try using this prompt for image generation: an analog watch showing 3.15pm time

  • by nategasser ( 224001 ) on Friday November 28, 2025 @08:25AM (#65822867)

    The ChatGPT (or any AI product) people actually use is more than just an LLM. People get wound up over the inherent limitations of LLMs as if that's some brick wall preventing these things from ever becoming useful.

    It's ludicrously simple to program a chatbot script to return the time instead of diving into the LLM if the user asks for the time. Same with mathematical operations - yes, LLMs sometimes get basic math wrong because that's not what they're designed for. But again, super simple to channel math requests to an engine that's built for that.

    If they haven't fixed a particular shortcoming yet it's because they're seeking more complete solutions than picking off one tiny complaint at a time. But these are trivial to fix and smarmy posts like this will not age well.

    • by burtosis ( 1124179 ) on Friday November 28, 2025 @09:19AM (#65822937)

      The ChatGPT (or any AI product) people actually use is more than just an LLM. People get wound up over the inherent limitations of LLMs as if that's some brick wall preventing these things from ever becoming useful.

      It's ludicrously simple to program a chatbot script to return the time instead of diving into the LLM if the user asks for the time. Same with mathematical operations - yes, LLMs sometimes get basic math wrong because that's not what they're designed for. But again, super simple to channel math requests to an engine that's built for that.

      If they haven't fixed a particular shortcoming yet it's because they're seeking more complete solutions than picking off one tiny complaint at a time. But these are trivial to fix and smarmy posts like this will not age well.

      The whole problem with this approach is a) we don’t have all the systems we need individually at a level of capability that meets the current level of hype, and b) we don’t have a capable enough system-management AI architecture; just using some weighted tokens is horribly deficient.

    • Yeah I have used Semantic Kernel to code AI in .NET and I did not give it the capability to tell the current date and time but it would be a 5 minute fix to do so since getting the current time is trivial. The bigger problem would be ensuring the offline server the AI runs on has its clock set correctly.
    • by PPH ( 736903 )

      I use a wristwatch.

    • by gweihir ( 88907 )

      No, that is not "super easy". The problem is the LLM needs to recognize it needs to hand-off a specific question. And that it cannot do reliably.

      • by allo ( 1728082 )

        Nah, a GPT-4+ level LLM can, at least for the straightforward question "What time is it?", reliably recognize that it should use the tool "{name: time, args: [], result: Current date and time}" or system prompt.

        Look at the unsloth prompt for mistral:
        > You are Mistral-Small-3.2-24B-Instruct-2506, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is " + t

        • by gweihir ( 88907 )

          "Nah" and "can do it at least for a straightforward question" do not go together here.

          Incidentally, I was also commenting on the math aspect. You are obviously mentally deficient and only see the keywords that you want to see in a text.

          • by allo ( 1728082 )

            You rely on insults instead of arguments.

  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Friday November 28, 2025 @08:37AM (#65822879) Homepage
    GPT5-Pro ($200/mo) answers with the correct time, even though it takes five minutes to come up with the answer.
  • I don't know how some people get these types of weird answers, I cannot get the damn thing to tell me the wrong time. The only really weird thing is that it is 'searching the web' while 'thinking'.

    • Asked the thing to tell me:

      I have heard that you sometimes give wrong time and sometimes even answer that you cannot tell the time, why does this happen to some people?
      ChatGPT said:

      You see different behavior because the system reads time from the user's environment. Some users block this data. Some browsers hide it. Some devices send inconsistent region info. This leads to gaps.

      You see steady answers because your browser shares stable time data.

      Other users trigger fallback behavior. In those cases I answer that I cannot read the time.

      is this it?

  • One of my common online searches is for technical info about software bugs. Google's assistant is great at giving me results that apply to three OS versions ago.

  • It's 5 o'clock somewhere

  • by FudRucker ( 866063 ) on Friday November 28, 2025 @09:45AM (#65822975)
    When you show it how, AI should be renamed artificial ignorance https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpostimg.cc%2F30Tqjs4Z [postimg.cc]
  • by zawarski ( 1381571 ) on Friday November 28, 2025 @09:49AM (#65822983)
    Or six to four.
    • by PPH ( 736903 )

      Does anyone really know what time it is?
      Does anyone really care?

    • You should watch the "live at Tanglewood" version of that song. I am still blown away by how professional and well timed they are. Maybe not the best live performance ever, but it deserves to be voted on for that distinction. Computers could not have been more accurate or well timed.

  • Two years ago, on Slashdot, there was a post about how Microsoft was trying to get ChatGPT to control a robot:

    https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fhardware.slashdot.org%2Fstory%2F23%2F02%2F25%2F2330204%2Fmicrosoft-tests-chatgpts-ability-to-control-robots

    As far as I can tell, there's been little or no progress on this. That is, there is no (public) prototype of either a robot built using the same principles as ChatGPT or a robot that ChatGPT can control. Why not? Why doesn't the mathematical foundation of large language models translate to the p

    • Because ChatGPT and most AIs only work with digitally encoded data. The movement of an arm, like turning a dial, implies a translation from digital to real-world analog. Translating digital to interface with an analog world is really hard. Another example would be self-driving.
    • by Jeremi ( 14640 )

      There has been plenty of progress [interestin...eering.com] in using AI to control robotics; they use robotics-specific AIs for that, of course.

      The fact that ChatGPT (or even LLMs in general) isn't particularly useful for robots shouldn't be a surprise, since robots (other than maybe C3PO) are about physical manipulation of objects, not about language generation.

    • by gweihir ( 88907 )

      Do you know what happens if a robot-controller hallucinates? You end up with a broken robot, and maybe an injured or dead person on top of that. LLMs are not fit to control any physical actuators.

    • by allo ( 1728082 )

      Because a robot is better controlled using fine-grained servo commands than using English words. And if you use a word-to-servo mechanism (possibly a neural network) you should wonder why you start with an LLM instead of using the mechanism directly.

  • "Honest" Sam Altman here.

    Why can't LLMs tell time?
    I don't know, but give me $100 billion and I will blindly throw it at building datacenters and burning up the planet until AI can tell the time. Or I go bust.

    Which one do you think it will be?
  • by BytePusher ( 209961 ) on Friday November 28, 2025 @10:13AM (#65823021) Homepage
    My experience is that you have to double check everything any model tells you. They don't "think" or understand. I find them most useful as a search engine when I'm not knowledgeable enough to know the specific terms I'm looking for. Also, they're good if you just want to generate text, but aren't too concerned with the specifics. With image generation, even nanobanana, they have never given me anything remotely usable as they have extremely strong attractors towards producing images that already exist. I typically am more interested in the subject of an image than the composition and they've proven incapable of creating a unique subject.
  • Article is conflating a lot of different things. I'm currently using chatgpt to write some code. New versions get timestamps. Chatgpt is able to obtain the current time via python calls, but the results are variably inaccurate, and it's unclear why. It seems to have to do with resource limitations in its sandbox.

  • It is simply not acceptable in this day and age to be served stale information. We live in very fast times, where information changes dramatically every second.

    If a big new story breaks, or if my API version changes, I can't afford to wait for a model to take its sweet time to mercifully update its information.

    Cached data is only good for a limited time, usually a very short time.

  • Question: What is the time?

    Answer: The time is an indication of the apparent position of the sun.

  • Time is relatively unequivocal; a database query based on a random number will never give you the same answer.
  • ...because it can't "tell" anything. It is a highly advanced statistical-correlation tool. The only reason it appears to be anything more is that it appears to speak the way a human does, using our own language. Our brains are hard-wired for communication, so you could say they're hard-wired to anthropomorphize anything that appears to communicate the same way we do... it facilitates social behavior. LLMs don't know anything; they're probably the most advanced thing hu

  • Do not mistake an impressive toy for a professional tool.

  • You can't write a mathematical function that calculates the current time. A function has an input and computes an output. If the input does not contain a hint on the current time, the output also cannot contain the current time.

    All tricks to get a model to tell the time depend on injecting it in another way.
    For LLM this includes:
    - A system prompt that contains "The current time is hh:mm"
    - A MCP for time information
    - A function-call for current time
    - Injecting the current time when generating the answer
    - Let
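    The first trick in that list can be sketched in a few lines (the template text is illustrative, not any vendor's real system prompt):

```python
import datetime

# Sketch of the system-prompt trick: the "function" can only output the
# current time because we smuggled the time into its input.
def build_system_prompt(template: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    return template.format(current_time=now.strftime("%Y-%m-%d %H:%M UTC"))

system_prompt = build_system_prompt(
    "You are a helpful assistant. The current date and time is {current_time}."
)
```

    This is exactly the pattern visible in the Mistral prompt quoted upthread: the host concatenates the date into the prompt before the model ever runs.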

  • "I'm sorry Dave, but I can't do that"

  • A few years ago, even before LLMs, I was getting concerned that I might be getting phone calls from/to computers. I came up with "What day is it?" as a mini Turing Test question. It was out-of-band information that a human would know and wouldn't get confused by time zones, yet would be unlikely that someone would go out of their way to add to a phone chat bot. The expected answer from a human would be something like "Saturday", or "the tenth". A computer not programmed for this would get confused, and too
