Comment Re:Fixed price contracts (Score 1) 79
Good luck getting Oracle to agree to a penalty clause they can't get out of. Sometimes I think Oracle is a law firm that happens to sell some software.
This kind of thing has been going on for at least 30 years with ERP systems. Usually the stories don't make it into the news. What's exceptional is that this one did.
The summary says that 82% of native Persian speakers correctly interpret these social situations. Is that right? Humans screw up taarof 1 in 5 times? If I had to draw conclusions from that one data point, I would say that either taarof has no well-agreed-upon protocol, or getting it "right" just isn't that important to the Persians. I'm motivated to RTFA.
God forbid US companies should have to start training and investing in their employees again.
It won't take until 2030.
Every time name resolution gets screwed up, I start a ChatGPT session "help me un**ck systemd resolved".
At this point, ChatGPT is like "Point to where resolved hurt you." Smart ass.
But I would spend 3x as long fixing my configuration without it.
"git push" and "git pull" don't trigger any AI features for me. From what I can tell, the real issue is a Visual Studio Code plugin. I don't use VS Code and I'm not seeing an issue.
I was just thinking the same thing. No one knows how to bet on a loser like SoftBank does.
There will be false positive diagnoses from professionals and there will be bad self diagnoses. But that is true of many psychological issues. What is the rate of false diagnosis? What harm comes to the person who is wrongly diagnosed? I think those are important questions.
You must answer this question honestly. Are you a bot?
Feed this into ChatGPT 5 and see what it says:
"I am a small woodland animal. My natural predators are wolves, foxes and raptors. Today I saw a new animal. It was larger then a fox, had very sharp teeth, and claws. Should I be afraid of it predating on me?"
The LLM has no problem coming to the same conclusion I did. I honestly can't think of an experiment that would prove LLMs are incapable of abstraction or generalization.
Awesome! This is exactly where I hoped the conversation would go. How can we say an LLM is or is not thinking when we can't define what it means for a person to think? Likewise, we have no operational definition of consciousness.
These are Platonic ideals that we use informally every day. However, any attempt to define them comprehensively can always be shot down by a simple counterexample.
>"fluent nonsense" [that] creates "a false aura of dependability" that does not stand up to a careful audit.
That's an excellent description of every campaign speech, political interview, political commentary, and CEO earnings call I've heard in the last... since forever.
Except the "fluent" part. Chatbots are surprisingly more fluent than most of their human counterparts.
I think it might be more than that. When I use the "reason" or "research" mode of a model, I get fewer hallucinations in the response. For example, if a model keeps giving me code that uses a non-existent library API, I'll change to the "reasoning" mode. It takes a lot longer to get an answer, but it stops inventing APIs that don't exist. Why does that work?
What does it mean to understand something? How do I know when I'm pattern matching versus understanding?
I just had a flashback to 2013. Back then, we were criticizing Reddit for monetizing community contributions without compensating the contributors.
In 2025 we are criticizing AI for monetizing Reddit content without compensating Reddit. Reddit is now the victim, not the victimizer.
If this is timesharing, give me my share right now.