Comment Re:AI can also FIX t (Score 1) 92
Security isn't convenient.
Security isn't easy, but it isn't hard either.
Assume you're a target (because you are) and make it so that you're hardened. Don't be the easy target. Criminals are lazy.
Instead of fearing AI, use it to secure software and make it better.
We have nothing to fear but fear itself.
Yeah, the privacy aspect of EFF doesn't align with social media in general IMHO.
Are they trying to "earn" some $ from social media? That's a silly goal, but might be a nice side benefit.
It's worse than that, because with proper skill it isn't even copy/paste. It is one app that posts to everything all at once, even the social media sites that didn't make the list.
Buffer, Hootsuite, Metricool, Robopost, or Later
One could probably use AI to tweak the post effectively for each platform.
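As a rough sketch of how that fan-out might work (the client objects, their post() method, and the character limits below are hypothetical placeholders, not any real API; 300 for Bluesky and 500 for Mastodon are the commonly cited defaults):

    # Hypothetical sketch: one post, fanned out to every platform at once.
    # Client objects and their post() method are assumed, not a real API.
    PLATFORM_LIMITS = {"bluesky": 300, "mastodon": 500, "linkedin": 3000}

    def adapt(text: str, platform: str) -> str:
        """Fit the post to a platform's limit; an LLM could rewrite instead of truncate."""
        limit = PLATFORM_LIMITS.get(platform, 280)
        return text if len(text) <= limit else text[: limit - 3] + "..."

    def crosspost(text: str, clients: dict) -> None:
        for platform, client in clients.items():
            client.post(adapt(text, platform))  # client.post() is assumed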
Here is the list they are staying with:
Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube
So, where did the audience go? It didn't go to the established platforms from 8-20 years ago, and I doubt it went to the two new kids.
What this tells me is that their audience is aging and dying off, and the younger generations aren't there in numbers. This requires little to no political inference to understand. It is easy to mistake one for the other.
Yes, I am a Boomer. I don't rely upon AI to tell me what to think. I am also a Libertarian, interested in privacy, and have long been a proponent of Open Source. Maybe figure out where those interests intersect with the younger generations and go there.
Now let's bring these requirements into law, permanently, across all industrial and consumer devices.
Any obstacle to repair and maintenance other than the inherent difficulty of the operation is anti-consumer and, in the long run, economically damaging (many of the inherent difficulties are as well, but we gotta start somewhere).
If we change the "right to repair" laws, we should also change the liability laws. If a home-repaired unit becomes unsafe and injures people, who is responsible?
In the case of farming equipment, suppose a farmer makes a repair to a piece of equipment and then his son is injured or killed by said equipment. Who is liable?
The company would say that the farmer took full responsibility once he modified the equipment, while the farmer could say that his modifications did not affect the safety of the device.
It's also not at all clear whether a physical repair done by the farmer could have contributed to an accident caused by software. Lots of things can affect the software's behaviour, such as the alignment of two welded pieces. The software estimates stopping distance based on the information it has, but the repair might have changed those parameters.
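To make that concrete, here's a back-of-envelope sketch using the standard stopping-distance formula d = v^2 / (2 * mu * g); the friction values are illustrative only:

    # Stopping distance d = v^2 / (2 * mu * g). If a repair changes the
    # effective friction mu (brake wear, alignment), the software's factory
    # assumption no longer matches reality. Numbers are illustrative.
    G = 9.81  # gravitational acceleration, m/s^2

    def stopping_distance(speed_ms: float, mu: float) -> float:
        return speed_ms ** 2 / (2 * mu * G)

    v = 15.0  # m/s, about 54 km/h
    print(stopping_distance(v, mu=0.7))  # ~16.4 m with the factory assumption
    print(stopping_distance(v, mu=0.5))  # ~22.9 m after a repair degrades braking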
People who like to race want to download new parameters into the ECU of their car, but that's illegal: the factory parameters are set to maximize efficiency, and while you can get better performance with different numbers, the environmental cost is why it was outlawed.
Being able to repair things is good, and it's very clear that open source has driven the software industry forward, but we need to be careful about liability as well. Jailbreaking your phone is one thing, but jailbreaking your EV might have catastrophic consequences. I'm not a fan of ID-tagged headlights (BMW, Mazda), but if an accident occurs because of reduced visibility, the company could be held liable.
I'm completely in favor of being able to repair things, and John Deere's conduct is the worst sort of predatory behaviour, but I just wanted to point out that there's another side to the story and we should be careful.
I think what is really going on is that it is not "fluid IQ" but regular, normal IQ.
"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.
These are technical terms used in the literature.
Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.
And of course, lots of people with high intelligence stop learning in their mid-twenties. At that point they've conquered their environment and are living successful lives (good job, married, kids &c), so there's no real reason to push themselves. Lots and lots of people, even smart people, haven't read a single book in the last year - and this observation was true in the 1970s, before the internet.
(And nowadays this is probably more accurate due to the appalling quality of information found on the internet.)
That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.
It's a question of training. We evolved to believe what people say; it's a way of reducing the cognitive load of learning things (by believing what someone else has already figured out). We're not used to questioning the logic of someone else's beliefs.
As an example of this, note that Warren Buffett has built a career on identifying fallacies in business; google "Warren Buffett fallacies" for a list.
None of these fallacies is taught in school; everyone has to find them and figure them out on their own. And then you have to use them in your daily life.
Almost no one is used to doing that, which leads to the current problems with AI.
Unlike reusable rockets, EVs, and full self driving...
Yeah, but other than that, what has Elon Musk ever done for us?
To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".
The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.
In this specific case, "perform the test" might be impossible for ethical reasons - you can't take people at random, sit them down in front of an LLM, and test their level of psychosis before and after, because of that pesky "do no harm" rule.
But we might be able to find people whose psychosis levels were measured before LLMs became available, and whose LLM accounts accurately record how much they have used them. We could then remeasure their levels of psychosis and see if the change correlates with LLM usage.
Or some other test like that.
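For instance, a minimal sketch of that analysis (the CSV file and column names are hypothetical placeholders):

    # Correlate change in a psychosis-symptom score with logged LLM usage.
    # "cohort.csv" and its columns are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("cohort.csv")  # pre/post scores plus usage hours
    df["score_change"] = df["psychosis_post"] - df["psychosis_pre"]

    r, p = pearsonr(df["llm_hours"], df["score_change"])
    print(f"r = {r:.2f}, p = {p:.3f}")  # correlation, not causation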
The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:
[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.
From the parent post:
One thing I can tell you, my mother was heavily affected by television.
I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.
I'm constantly presented with a situation or belief and have to pause, reflect, and say "I believe that because it was on TV; it's probably not real." Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.
We're hard-wired to believe what people tell us; it's a cognitive shortcut in an environment where you can't know everything firsthand. But lots and lots of what we think today is only dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)
For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.
Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.
Using AI is training AI to replace you. If you can be replaced with AI, you will be, and should be.
If you don't use AI, your peers will, and you will be replaced by AI anyway.
Good Luck
Lying to yourself is the biggest danger when trying to stay anonymous. With enough patterns to recognize, the idea that one can hide is delusional.
The only way to win is to run EVERYTHING you post through an AI that changes the tone and wording of all your online activity. But even then, that may itself be a lie.
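To see why hiding is so hard, here is a toy sketch of the oldest stylometric trick: comparing function-word frequencies across posts (real attribution attacks use far richer features; the example posts are made up):

    # Toy stylometry: function-word frequencies as an authorship fingerprint.
    from collections import Counter
    import math

    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

    def fingerprint(text):
        words = text.lower().split()
        total = max(len(words), 1)
        counts = Counter(words)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)

    post_a = "It is the case that the system is broken and it is time to fix it."
    post_b = "The fact is that it is broken and the time for a fix is now."
    print(cosine(fingerprint(post_a), fingerprint(post_b)))  # near 1.0 = same style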
It's Not Enough to "Caution Against".
That language is TOO PASSIVE.
The correct and appropriate phrase is "HELL NO", followed by tossing out 1st, 4th, and 5th Amendment claims IN COURT.
What about Altman making "Open" AI closed-source and for-profit years ago didn't tell you he was a dirty, money-grubbing cunt ?
Bring on the bankruptcy!
LLaMA was [illegally] leaked to the public three years ago (to the day: March 3, 2023), and it's estimated that ten years of AI improvements happened in the subsequent six months. People were doing all sorts of things with LLMs that Meta hadn't thought of, or didn't have time to develop, such as text-to-audio, local LLM use, and automated manuscript generation.
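"Local LLM use" really can be that simple. A minimal sketch with llama-cpp-python, assuming you have a GGUF model file on disk (the path and prompt are placeholders):

    # Minimal local inference with llama-cpp-python; model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-7b.gguf", n_ctx=2048)
    out = llm("Q: What did the community build on leaked weights? A:",
              max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])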
All these attempts at monetizing the LLMs are, at the same time, holding back the progress of AI development. If OpenAI wants to leap ahead of the competition, they should put their language model online and see what the community comes up with.
I get it - training an LLM takes roughly $100 million for the initial training run, and companies need to recoup this expense.
Still, I'm saddened that I can only use the system for purposes that the company approves of, and in ways that they have already thought of.
There's a lot of potential there, and we're not making good use of it.
I read that and was simultaneously laughing and angry. I'd call it a load of horseshit, but that would be insulting to horseshit.
What a bunch of windbaggery. Meaningless, feckless corporate speak.
We know. They know we know. We know they know we know. They don't care.
Nothing says "fuck you" like a "well worded" press release. It was only missing the AI EM-DASH.
Adding manpower to a late software project makes it later. -- F. Brooks, "The Mythical Man-Month"