however they make that profit through saving lives.
LOL! +5 Funny!
For example, if creating the spreadsheet would take 3 hours, but using AI it only took 15 minutes plus an additional hour to correct errors, that is still a net saving of almost 2 hours.
Here in reality, this is how it would go: it would take the user about an hour to get the AI to make something resembling the spreadsheet they want, another hour before giving up on trying to make all the necessary corrections, and then 3 hours to just do the damn thing themselves. That's 5 hours total: a net loss of 2 hours.
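To make the arithmetic explicit, here's a toy tally in Python (the hour figures are just the ones from the comments above, not measurements):

    # Parent's optimistic scenario: AI draft plus cleanup vs. doing it by hand.
    manual_hours = 3.0
    ai_scenario = 0.25 + 1.0           # 15 min of prompting + 1 hour of corrections
    print(manual_hours - ai_scenario)  # 1.75 -> "almost 2 hours" saved

    # The realistic scenario: prompt, fail to fix it, then redo it by hand anyway.
    realistic = 1.0 + 1.0 + 3.0        # prompting + giving up on corrections + redoing it
    print(manual_hours - realistic)    # -2.0 -> a net loss of 2 hours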
Now, I will grant that LLMs can have a multiplicative effect in an organization. For example: you use AI to quickly make a spreadsheet, which you then pass along to a coworker or subordinate to 'check for errors and make corrections'. The coworker, seeing the mess you handed them, uses AI to make the 'corrections' you asked for and sends it back. You glance at it and see some changes. Confident that your coworker actually did their job, you pass it along to whoever actually needs the spreadsheet. The recipient tries to make a report based on the nonsense in the spreadsheet, gives up, and asks AI to make the reports for them until they get the numbers they expect. (prompt: "fix the data in the spreadsheet and produce a TPS report. do not introduce gnu miss steaks.") Their supervisor then acts on the data in the report, pleased that their department was able to effectively leverage AI to increase productivity.
The doomers are right, just for the wrong reasons.
Accelerants? Nonsense. I've found that while users often report increased productivity, this generally isn't supported by objective measures. AI use among actually competent users tends to drop off over time, as they notice that they avoid reaching for the "assistance" when pressed for time.
If CEOs report bullshit to investors, they're committing a crime. It's not like when they murder or poison people either -- these crimes are actually prosecuted. Money and position won't protect you when your victims are other wealthy and powerful people.
Thanks to NFTs, people can now use fake money to pretend that they're buying fake art!
That is an absolutely terrible application for an LLM. There are, and have been, alternative approaches that are far more accurate, predictable, reliable, and less expensive. If you're using an LLM for that now, you are, without question, paying significantly more for worse results.
Think scan a packing slip or order acknowledgement pdf and import it.
When you say "scan a pdf"... there's at least a 60% chance that you, or someone at your organization, regularly prints out PDFs and runs them through a scanner because scanning is one of the steps in the data entry process they were taught.
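For the record, the "more accurate, predictable" alternatives mentioned above are things like plain template-based extraction. A minimal Python sketch, assuming the PDF has already been run through OCR or pdftotext (the field names and layout here are hypothetical):

    import re

    # Hypothetical packing-slip text, as produced by OCR or pdftotext.
    slip_text = """
    Order No: PO-48213
    Ship Date: 2024-03-15
    Item: WIDGET-A   Qty: 12
    Item: WIDGET-B   Qty: 3
    """

    # Deterministic, per-vendor templates: the same input always parses the same way,
    # which is exactly what you want for automated import.
    ORDER_RE = re.compile(r"Order No:\s*(\S+)")
    ITEM_RE = re.compile(r"Item:\s*(\S+)\s+Qty:\s*(\d+)")

    order_no = ORDER_RE.search(slip_text).group(1)
    items = [(sku, int(qty)) for sku, qty in ITEM_RE.findall(slip_text)]
    print(order_no, items)  # PO-48213 [('WIDGET-A', 12), ('WIDGET-B', 3)]

Boring, yes. But it never hallucinates a quantity, and it costs nothing per document.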
Also it isn't really ai yet. AI is coming
More nonsense. AI is, and always has been, a very broad term that covers quite a few things that you wouldn't classify as AI. What AI is not, and has never been, is whatever science fiction absurdity you've decided is "really ai".
It's not like the Amish were like "the 15th century was so cool! Let's screw all this modern shit and party like it's 1693 forever!" They were never really looking backward; they just sort of stopped moving forward.
I'd agree, but experience has shown that most developers just 'code', building increasingly absurd "architectures" to justify larger teams while delivering surprisingly little.
An anecdote, as told to me by a member of their organization's technology steering committee: a request to add a new payment method to their website was finally rejected after seven months, when their web team (six developers and a non-technical 'scrum master') determined, six months in, that the integration was not possible, claiming that they lacked the necessary expertise.
What you're describing is nice, but I can't recall seeing a single example of it outside of smaller teams or one-man efforts at small businesses. Though even in that case, you're more likely to find some prima donna with too little domain knowledge and too much autonomy, forcing their own "system designs" on users without even a rudimentary understanding of existing processes and their purpose.
The popularity of Agile has led not just to inflated code-bases but to inflated team sizes as well, further isolating developers from users. The average 'coder' can't do much more than focus on writing code and, honestly, probably doesn't care. Attitudes towards employment in general have also changed. The average 'coder' today is here to tick the boxes, maximize whatever metrics make them look valuable, and build their resume for their next job change. They don't care about the code or the company because they don't plan on staying anyway.
"AI made us more productive" has become the perfect excuse for the layoffs and hiring freezes that were going to happen anyway, given the state of the economy. I wonder how long people will believe it?
I almost can't believe I used to worry so much about efficiency
That attitude is one of the primary reasons why software is so bad these days. It's not even about optimization; you can get significantly better performance by just not doing stupid things.
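To illustrate the kind of stupid thing I mean, a toy Python example (the exact speedup depends on the machine, but the shape of it doesn't):

    import time

    ids = list(range(10_000))
    lookups = list(range(0, 20_000, 2))

    # The stupid way: a linear scan of the list per lookup, O(n*m) overall.
    t0 = time.perf_counter()
    hits = sum(1 for x in lookups if x in ids)
    slow = time.perf_counter() - t0

    # The non-stupid way: hash the ids once, then each lookup is O(1).
    t0 = time.perf_counter()
    id_set = set(ids)
    hits = sum(1 for x in lookups if x in id_set)
    fast = time.perf_counter() - t0

    print(f"{slow / fast:.0f}x slower for no reason")

No profiler, no clever optimization, just picking a data structure that isn't wrong.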
The point, which you seem to be actively avoiding, is that capping credit card interest rates would make it more difficult for people to get credit cards, which would make it more difficult for them to build credit.
Yes, banks want that revenue. That's why they'll actively oppose this half-baked nonsense. Remember: they invented the credit score system for their benefit, not yours. Corporations are not your friend and predatory lending is big business.
wouldn't having cards cancelled and/or not issued be a good thing? [...] What's the downside?
Losing a credit card will hurt your score, not help it.
Remember that the amount of credit you have available, how long you've had it, and how much you're using are important factors when determining your credit score. Having a bunch of old cards is actually good for your credit.
Making it difficult for people to get credit cards will also make it more difficult for them to build and maintain credit.
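To illustrate with made-up numbers (real scoring models are proprietary, but utilization and average account age move in exactly this direction):

    # Three cards: (credit limit, balance, age of account in years)
    cards = [(5000, 0, 10), (3000, 400, 7), (2000, 100, 1)]

    def utilization(cards):
        return sum(bal for _, bal, _ in cards) / sum(lim for lim, _, _ in cards)

    def avg_age(cards):
        return sum(age for _, _, age in cards) / len(cards)

    print(f"{utilization(cards):.0%} used, {avg_age(cards):.1f} yrs avg")  # 5% used, 6.0 yrs avg

    # Cancel the old zero-balance card: utilization doubles and average age drops,
    # both of which hurt the score, even though you owe exactly the same money.
    cards_after = cards[1:]
    print(f"{utilization(cards_after):.0%} used, {avg_age(cards_after):.1f} yrs avg")  # 10% used, 4.0 yrs avg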
Let's say we live in a fantasy land where LLMs are magically 95% accurate. Would you trust a car that only worked 95% of the time? What about brakes that only stopped your car 95% of the time?
What about legal advice? Would you hire a lawyer that would make up silly nonsense 5% of the time?
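And "95% of the time" is worse than it sounds, because failures compound across repeated uses. Assuming each use succeeds independently, a quick back-of-the-envelope:

    # Probability of getting through n uses without a single failure.
    p = 0.95
    for n in (1, 10, 100):
        print(n, f"{p**n:.1%}")
    # 1    95.0%
    # 10   59.9%
    # 100   0.6%

By the hundredth use of those 95% brakes, you're almost certainly dead.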
Sorry, kid. LLMs just aren't the science fiction fantasy that you want them to be. Your AI girlfriend does not and can not love you. You're not going to have a robot slave. Whatever nonsense it is that you're hoping for isn't going to happen. Not with the technology we have today.
Human mistakes are of an entirely different nature and quality than AI 'mistakes'. A human won't accidentally make up facts, cases, or sources. A human won't write summaries of things that don't exist. A human won't accidentally directly contradict a source while citing it. A human is also actually capable of identifying and correcting mistakes, unlike an LLM. Stop with this absurd nonsense that it's okay for LLMs to "make mistakes" because humans also "make mistakes". These things are not the same and you know it.
As for this 100% business, with AI, you'd be lucky to get 60% accuracy. A human with that kind of track record offering legal advice would be arrested.