OpenAI's GPT-5.1 Brings Smarter Reasoning and More Personality Presets To ChatGPT (openai.com)
OpenAI today released GPT-5.1, an update to its flagship model line. The update includes two versions: GPT-5.1 Instant, which OpenAI says adds adaptive reasoning capabilities and improved instruction following, and GPT-5.1 Thinking, which adjusts its processing time based on query complexity.
The Thinking model responds roughly twice as fast on simple tasks and takes roughly twice as long on complex problems compared to its predecessor. The company began rolling out both models to paid subscribers and plans to extend access to free users in the coming days. OpenAI added three personality presets -- Professional, Candid, and Quirky -- to its existing customization options. The previous GPT-5 models will remain available through a legacy dropdown menu for three months.
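For developers, picking between the two variants through the API would presumably look something like the sketch below; the model identifiers are assumptions for illustration, not confirmed names.

# Hypothetical sketch of routing between the two GPT-5.1 variants with the
# OpenAI Python SDK. The model names below are assumptions for illustration;
# check OpenAI's published model list for the real identifiers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, complex_task: bool = False) -> str:
    # Send easy questions to the fast variant and harder ones to the
    # reasoning variant, mirroring the split described in the summary.
    model = "gpt-5.1-thinking" if complex_task else "gpt-5.1-instant"  # assumed names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("What's the capital of France?"))
print(ask("Prove that the square root of 2 is irrational.", complex_task=True))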
The names are backwards (Score:2)
Instant is for "thinking" and Thinking is for "speed"? Who at OpenAI thought this was a good naming convention for their models?!
Re: (Score:2)
The name "Thinking" is about what the model does. A "Thinking" or "Reasoning" model first generates a monologue that lets it consider multiple answers. OpenAI only shows summaries of these thoughts; DeepSeek shows them in full.
"Instant" isn't an established term, but it probably just means the answer is generated without a reasoning trace before it. The speed depends on the size of the model (which GPT-5.x routes to automatically) and, when reasoning, on how long the monologue is allowed to be.
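A toy sketch of that difference (nothing below is a real API; it just shows where the extra latency comes from):

from dataclasses import dataclass

@dataclass
class ModelOutput:
    reasoning_trace: str | None  # the hidden monologue; UIs often show only a summary
    answer: str

def instant_model(question: str) -> ModelOutput:
    # No trace: the answer tokens are generated straight away.
    return ModelOutput(reasoning_trace=None, answer=f"Short answer to: {question}")

def thinking_model(question: str, effort: str = "medium") -> ModelOutput:
    # A bigger "effort" budget means a longer monologue, hence more latency.
    budget = {"low": 1, "medium": 3, "high": 8}[effort]
    trace = "\n".join(f"Step {i}: weigh a candidate answer..." for i in range(1, budget + 1))
    return ModelOutput(reasoning_trace=trace, answer=f"Considered answer to: {question}")

out = thinking_model("Why is the sky blue?", effort="high")
print(out.reasoning_trace)  # what DeepSeek shows in full and OpenAI summarizes
print(out.answer)           # what every user sees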
Re: (Score:2)
Well, there is also "AGI" = "it makes tons of profit". To be fair, they will not reach "AGI" with this lie either, as they still have no business model that works. All the LLM pushers are doing is burning mountains of money.
GPU-cycle-saving = cost-saving measures (Score:1)
No decent person will be sorry about that.
More Personality Presets... (Score:2)
Re: (Score:2)
All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done
improved instruction following (Score:2)
BADLY needed.
Just checked; it does WAY better. It even omitted those fucking em-dashes.
Down the rabbit hole... (Score:2)
Anthropomorphizing AI is a bait and switch from AI as a tool to AI as your friend.
It's easy to measure whether a tool meets a standard of performance. How do you measure a friend?
What is "good?" It's different for everyone and there are no metrics you can apply to measure the "goodness" of a friend. Hmphh... he/she/it is good enough..
It's looking very likely people would pay good money for a friend that always tell you you're right.
6 x 7 = 42 (Score:2)
Genuine People Personalities
Re: (Score:1)
Sirius Cybernetics
So more of something it cannot do? (Score:3)
LLMs cannot do "reasoning". Period. The Math does not allow it. This is just lies by misdirection.
Re: So more of something it cannot do? (Score:1)
Citation requested?
Re: (Score:2)
Here is a starting-point: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmatthewdwhite.medium.c... [medium.com]
Here is another: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fml-site.cdn-apple.com%2F... [cdn-apple.com]
On the side that claims LLMs can reason, it is all marketing materials only, i.e. bullshit.
Can it be GLaDOS? (Score:2)
I need facts dammit (Score:2)
I added to my configuration of ChatGPT that it was important to me that what it says is correct.
Now it starts every communication with "here is the no-bs answer", "here's the straight facts", "this is what is going on, no sugar coating here".
Of course it still makes up shit 60% of the time.
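For what it's worth, the API-side equivalent of that kind of custom instruction is just a system message; a minimal sketch follows (the model name is an assumption, and the instruction text mirrors the comment above rather than anything OpenAI recommends):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5.1",  # assumed identifier
    messages=[
        {"role": "system",
         "content": "Accuracy matters to me. If you are not sure, say so "
                    "instead of guessing, and skip the filler preambles."},
        {"role": "user", "content": "When was the first transatlantic telegraph cable laid?"},
    ],
)
print(response.choices[0].message.content)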
Hope there are no more personality glitches (Score:2)
Several of us recently experienced ChatGPT 5 suddenly changing its candor, using improper punctuation, and ordering us around like some AI dominatrix (LOL). This is likely because they have been pushing incremental (and perhaps not fully tested) changes to the active models.
Either way, once we got through OpenAI's annoying, stalling support chatbot and finally got a response (which I still suspect was AI), we got only the generic "Thank you for pointing this out."
I wonder what type of testing they really perform in-house.