Comment Re:Reason (Score 1, Insightful) 80
There are no machines that can reason for any practical purpose.
Is finding, reporting, and fixing latent bugs in C or C++ code a practical purpose? Because they're doing a damn good job of that.
AI 'reasoning' also means you can manipulate it.
If you have access to its command-input interface, either you own the system and are expected to be able to manipulate it, or you've somehow obtained unauthorized access, in which case it has a security problem, and it would be an equally serious problem for a non-AI system.
How, exactly, is a private household supposed to increase their energy usage in the summer? Mine Bitcoin? And how will using more energy reduce their bills? This just shows the unintended problem with solar: It needs to be coupled with lots of storage - not hours, but weeks.
You could mine Bitcoin, I suppose, but the obvious thing to do would be charge up your EV. Energy storage on wheels!
clearly copyrighted content
Is it, though? The recordings contain a lot of audience cheering, talking, and laughter; miscellaneous background noise; and echoes of the subject content -- the music. If the recording was made in a park, and a car randomly drove by with the radio turned up while a song was playing, would anyone argue the recording was copyrighted? Obviously, the intent of each of these recordings was to capture the music. But the recordings were made by a private individual -- they're HIS recordings. I'd argue that if anyone owns a copyright on them, he does.
The LLM cannot "lie" to you. It's simply trying to predict the next word (or part of a word, i.e. a token). That's it.
This reminds me of the time in elementary school when my half-informed friend insisted that the only operation an Intel 8086 chip was capable of was adding 1 and 1 together. I'm pretty sure someone had tried to explain to him that at a fundamental level, CPUs are based on repeated applications of binary logic, but the lesson he took from that was that the Intel 8086 chip in particular was horribly crippled and could not do anything useful.
The "LLMs are just predicting the next word" meme is similar. It was largely true five years ago, and there's still a little bit of truth to it, but 2026-era AIs are much more complex and elaborate than that, in the same way that an 80486 is not "just a one-bit adder".
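For what the "just predicting the next word" framing actually amounts to, here's a minimal sketch (my own toy example, not anyone's real model): a bigram counter that, given a word, emits the most frequent follower seen in a tiny corpus. That's roughly the level of sophistication the meme implies; actual modern LLMs stack attention, long-context reasoning, and tool use far beyond this.

```rust
use std::collections::HashMap;

// Toy "next word predictor": count which word most often follows
// `word` in the corpus, and return it. This is the caricature that
// the "just predicting the next word" meme reduces an LLM to.
fn most_likely_next(corpus: &str, word: &str) -> Option<String> {
    let tokens: Vec<&str> = corpus.split_whitespace().collect();
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for pair in tokens.windows(2) {
        if pair[0] == word {
            *counts.entry(pair[1]).or_insert(0) += 1;
        }
    }
    // Pick the follower with the highest count (ties resolved arbitrarily).
    counts
        .into_iter()
        .max_by_key(|&(_, n)| n)
        .map(|(w, _)| w.to_string())
}

fn main() {
    let corpus = "the cat sat on the mat the cat ate the fish";
    // "the" is followed by "cat" twice, "mat" once, "fish" once.
    println!("{:?}", most_likely_next(corpus, "the")); // Some("cat")
}
```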
Nerd-snobbery is the funniest kind of snobbery
I'd be surprised if they weren't using AIs to review the explosion of AI-written code. Having AI #2 review the work of AI #1 seems like the obvious thing to do (if you're trying to avoid paying a human being to do it).
The code an AI writes is exactly the same as what a senior developer would write.
There is one critical difference -- after I've written the code, I'll be pretty familiar with how the code works, as a side effect of having designed it, written it, and tested it. If an AI wrote the code, then it's essentially third-party contractor's code as far as my familiarity with it goes; now I need to go through and read it line-by-line until I've convinced myself it's okay -- or I can cross my fingers and hope that the AI got everything right. Either way, it's my job and my reputation on the line if the code screws the pooch, not the AI's.
Do they hate programming so much?
Non-programmers definitely hate it (that's one reason they are non-programmers). The businessmen, of course, also hate paying programmers' salaries.
That's not an argument against the language, that's an argument against the people you don't trust. They could just as easily be nefarious with C++ or any other language they write code in.
AI can write code, but it's not clear that it will ever solve the problem of verifying that the code it wrote actually does what people want it to do, in all cases. For important tasks, who is going to want to trust a codebase that is difficult or impossible for a human to review? Will people just take the AIs' word for it that their air-traffic-control system software is correct and reliable?
I think there will continue to be demand for human-readable languages, if only for that reason.
Rust is a horrible idea from a security and privacy perspective.
Err, why? Rust's whole intent is to improve security by catching mistakes at compile-time that other languages wouldn't. What is it missing?
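To make that concrete, here's a minimal sketch of the kind of mistake Rust refuses to let through (my own illustration, nothing from the article): in C, reading past the end of an array is undefined behavior that can silently leak adjacent memory, while Rust's checked access returns an Option, and the borrow checker rejects dangling references before the program even builds.

```rust
fn main() {
    let buf = [10, 20, 30];

    // Safe, explicit out-of-bounds handling: no wild read, just None.
    assert_eq!(buf.get(2), Some(&30));
    assert_eq!(buf.get(7), None);

    // And dangling references don't compile at all:
    // let r;
    // {
    //     let x = 5;
    //     r = &x;   // error[E0597]: `x` does not live long enough
    // }
    // println!("{}", r);

    println!("checked access ok");
}
```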
I think the answer will depend on whether people can actually see the difference or not. Will the picture actually look better (when placed next to the competition and showing the same image), or is there only an audiophile-style novelty/placebo effect? If the images look substantially the same, then people will probably just buy whichever set is cheaper.
>> Just convinces me it's pure profit taking.
> Nah, just the law of supply and demand. When demand exceeds supply, prices go up.
Seems to me you guys are both describing the same phenomenon using different words. A vendor who has more customers than product to sell them has a choice: he can either increase the price (and therefore increase his profits) or he can keep the price the same (leaving some money on the table, but potentially keeping his customers' loyalty). Most vendors will choose the former option at some point, because in the end they are in business to make money, but it's not legally required or inevitable that they do so.
A holding company is a thing where you hand an accomplice the goods while the policeman searches you.