Comment Re: Transform? Yes. (Score 1) 115
That reminds me of my O(N!) sort algorithm. (Really, it was a student of mine who proposed this, as a joke.)
(1) Randomize the array
(2) Is it sorted? If not, goto (1).
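The two steps above are the joke algorithm usually known as bogosort; a minimal Python sketch (function names are mine, not from the comment):

```python
import random

def is_sorted(a):
    # True if every adjacent pair is in non-decreasing order
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    # (1) Randomize the array; (2) if not sorted, goto (1)
    while not is_sorted(a):
        random.shuffle(a)
    return a

bogosort([3, 1, 2])  # eventually returns [1, 2, 3]
```

Strictly speaking the expected running time is O(N * N!) comparisons, since each of the N! shuffles costs an O(N) sortedness check, and in the worst case it never terminates at all.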
It's not clear to me that the people making the hiring/firing decisions, and deciding how many programmers can be replaced by AI, know the difference between the copy-paste coders you're talking about and the people who are doing the harder things.
And, given the way they think, and given that all of us are subject to a whole host of cognitive biases, some places at least are likely to want to keep the cheap copy-paste types rather than the more expensive senior programmers.
Short term, things will look good. Quarterly reports will be up. It will take longer for companies to realize that they've made a mistake and everything is going to shit, but because of the emphasis on quarterly returns, plus because all of these companies are caught up in the groupthink bandwagon of the AI evangelists, a lot of them as institutions may not be able to properly diagnose why things went to shit. (Even if individuals within the institutions do.)
I'm in science (astronomy) myself, and the push here is not quite as overwhelming as it is in the private sector. Still, I've seen people who should know better say "an AI can just do that more efficiently".
Of course, the fact that somebody talks about quantum physics without understanding it is not a predictor of whether or not that somebody is an idiot.
Talking about quantum physics without understanding it is a small but fairly universal part of our culture.
Somewhere early on in the video rental business back in the 80s, there was established a legal precedent that production companies couldn't forbid rental of anything they'd released on video. That carried over to DVD. Eventually we had Netflix DVD, which was superior to video rental stores because of its gigantic selection. Usually (though not always) what you wanted to rent was in stock. Yeah, there was a two day (or so) delay between deciding to watch something and getting to watch it, which we don't have with streaming. But one subscription got you pretty much everything.
Alas, the open renting thing did not transfer over to streaming, so now you have to subscribe to n different services to be able to get what you want on a whim -- undermining at least part of what streaming promised. And, stuff moves between services all the time. This is even before we talk about how crappy the discovery tools within any one streaming service are.
It was a sad day when Netflix DVD closed down.
I know what that would do to my morale.
The 1943 story "Q. U. R." had an inventor simplify robots that were going insane from having humanoid features they had no use for.
REVIEW: What would you do differently?
JOY: I wish we hadn't used all the keys on the keyboard. I think the interesting thing is that vi is really a mode-based editor. I think as mode-based editors go, it's pretty good. One of the good things about EMACS, though, is its programmability and the modelessness. Those are two ideas which never occurred to me.
I never had a Surface Studio. But I always wanted one for its 4500x3000 display. Microsoft did a good job in pushing 3:2 aspect ratio and driving the PC market away from the horrible letterboxing that dominated laptops and monitors for a decade. It's a pity that panel was never sold in a standalone monitor (Huawei talked about it but the product never reached the market).
Funny. On the several projects I've tried it for, Claude always offers to explain. And when I've needed that on a particular block, it does an excellent job. It does understand what it's doing, at least to some extent.
It doesn't understand anything.
It is trained to echo explanations, or at least things that sound like explanations and may often accidentally be correct, based on all the various technical documentation you can find on the net.
^^^ This.
The whole terminology of "hallucinations" is misleading. It suggests some sort of anomaly, a departure from normal functioning. But it's not. It's just LLMs behaving exactly the way they are designed, and not happening to give the right answer.
We have to remember that LLMs, as *designed*, are bullshit generators : https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Flink.springer.com%2Farti...
Just like college student papers, sometimes that bullshit is correct. If the students are really good at bullshitting, it's often correct. But that doesn't change the fact that it's bullshit, and that the goal is not to be correct, but to fool the person reading it into thinking you knew something that you didn't actually know.
"Hallucinations" are just those times when the LLM didn't, by chance, generate something true. It would take a fundamentally different system, designed to actually *understand* and not just try to replicate the style of what it's trained on, to be anything other than (perhaps often accidentally correct) bullshit.
Twenty or thirty years ago we started anticipating the AI singularity.
Today we see that instead we're going to get an AI crapularity.
(shove) you (shove) will (shove) take (shove) our (shove) ai (shove) technology (shove) and (shove) you (shove) will (shove) like (shove) it.
After all, what is good for Google is good for America, right?
"One day I woke up and discovered that I was in love with tripe." -- Tom Anderson