Submission + - Elon Musk Goes Nuclear (theatlantic.com) 4

sinij writes:

The world's richest man and the president of the United States are now openly fighting.

Trump threatened to cancel SpaceX's government contracts, and Musk accused Trump of being a frequent flyer to Pedophile Island. This would be highly entertaining if not for the potential to wreck companies, ruin the economy, and sabotage the legislative agenda.

Comment Re:Some background would be helpful (Score 1) 33

Well, under some conditions a unique movie car *would* be copyrightable. The case where the car is effectively a character is just one way you can argue a car is copyrightable.

Copyright is supposed to protect original creative expression, not ideas or functional items; those may be protected by *other* forms of intellectual property like trademarks or patents. A car is a functional item, so *normally* it isn't protected. But to the degree a car in your movie has unique expressive elements that are distinct from its function, those elements can be copyrighted.

But the plaintiff still wanted to claim that he owned the design of the car, so his lawyer looked for a precedent that established that cars can sometimes be copyrighted even though they are functional items, and he found the Batmobile case, where the Batmobile was ruled to be a prop that was *also* a character. Because he cited this case, the judge had to rule whether the Batmobile ruling's reasoning applied to this car, and he decided it didn't. The car may be unique and iconic, but that's not enough to make it a character.

Comment Re:If AI were an employee (Score 1) 23

Sadly, based on experience I think you are wrong. Employees who screw up are often not fired, or are replaced with employees just as bad.

There's a reason "you pay peanuts, you get monkeys" is a common saying: it's very common for employers to accept mediocre or even poor work if the employees doing it are cheap enough. I'm not anti-AI -- not even anti generative AI. I think with AI's ability to process and access huge volumes of data, it has tremendous potential in the right hands. But generative AI in particular has an irresistible appeal to a managerial culture that prefers mediocrity when it's cheap enough.

Instead of hiring someone with expensive thinking skills to use AI tools effectively and safely, you can just have your team of monkeys run an AI chat bot. Or you can fire the whole team and be the monkey yourself. The salary savings are concrete and immediate; the quality risks and costs seem more abstract because they haven't happened yet. Now as a manager it's your job to guide the company to a successful future, but remember you're probably mediocre at your job. Most people are.

According to economics, employers stop adding employees when the marginal product of the next employee no longer covers the cost of employing them. What this means is that AI *should* create an enormous demand for people with advanced intellectual skills. But it won't, because managers don't behave the way they do in neat abstract economic models. What it will do is eliminate a lot of jobs where management neither desires nor rewards performance, because they don't want anything a human mind can, at this point, uniquely provide.

Comment Doctor Who Cares ? (Score 1) 77

The show fell off a cliff with Jodie Whittaker, and not at all because of her. In the first three or so episodes I watched, she gave a reasonably good performance. But the material they gave her to work with was just atrocious. Utter crap. Stuff they must've dug out of the very bottom of the "rejected ideas" bin.

The ensemble cast didn't work, like, at all. I never cared for any of them even the tiniest bit. The Doctor, the most feared creature in the universe, a being able to rip reality apart and put it back together, someone who can start or end wars with a few words. The Doctor who literally stood before the aliens of the universe assembled above Earth, announced he'd stand in their way with neither a plan nor any weapons, and told them to "do the smart thing. Let somebody else try first." - and they all decided to fuck off instead.

So THAT Doctor suddenly became a bumbling idiot who succeeded only through luck and plot convenience.

So maybe going back to Rose is a chance at a restart. After all, she _was_ Bad Wolf. Though I fear they'll just cheap out with some "oh, I just picked a familiar face at random" bullshit.

Comment Nah (Score 1) 105

I wish, but nah, this is pure SciFi.

Why? Because it's not all in the brain. The brain is connected to the entire nervous system. The "mind-body duality" doesn't exist. You're not a mind that has a body, you're a body that has a mind. We know that the body can survive without the mind (coma patients, some extreme cases of mental or debilitating illness, etc.) - but there isn't one case of a mind without a body.

Even if you could upload yourself to a supercomputer with the same processing power as your brain, I'm pretty sure the first dozens or hundreds of such experiments will go the SpaceX Starship way - lots of fireworks for every tiny bit of ground gained.

I personally think we should work on replicating less complex parts of the nervous system first. One, we'll need it if we want full mind digitalisation. Two, it can help people today (amputees, etc.). Three, some of that work is already underway and making good progress.

Comment never (Score 4, Funny) 99

self-governing platform where high-reputation users gained moderation powers

Yeah. Never, ever, do that. I've run a few online communities, back when your own forum was still a thing and you could survive without being a Facebook group, a subreddit, or a Stack Overflow.

Your most active users aren't always your best users, and they are almost always NOT the ones you want as moderators.

If I could do all that again, I would give mod rights to the people who contribute just a bit, but consistently over a long time, and who read more than they write.

Submission + - Stack Overflow's Radical New Plan to Fight AI-Induced Death Spiral (thenewstack.io)

DevNull127 writes: Stack Overflow will test paying experts to answer questions. That's one of many radical experiments they're now trying to stave off an AI-induced death spiral. Questions and answers on the site have plummeted more than 90% since April of 2020. So here's what Stack Overflow will try next.

— They're bringing back Chat, according to their CEO (to foster "even more connections between our community members" in "an increasingly AI-driven world").

— They're building a "new Stack Overflow" meant to feel like a personalized portal. "It might collect videos, blogs, Q&A, war stories, jokes, educational materials, jobs... and fold them together into one personalized destination."

— They're proposing areas more open to discussion, described as "more flexible Stack Exchanges... where users can explore ideas or share opinions."

— They're also licensing Stack Overflow content to AI companies for training their models.

— Again, they will test paying experts to answer questions.

Comment Re:Duh! (Score 1) 68

I think we should make a distinction between "AI" and "AGI" here. Human intelligence consists of a number of disparate faculties -- spatial reasoning, sensory perception, social perception, analogical reasoning, metacognition etc. -- which are orchestrated by consciousness and executive function.

Natural intelligence is like a massive toolbox of cognitive capabilities, useful for survival, that evolution has assembled over the six hundred million years since neurons evolved. The upshot is that you can reason your way to the faulty conclusion that you can remove the blue block from underneath the red block without disturbing the red block, but then metacognition, drawing on *other* mental faculties, will overrule that conclusion. Right there you have one big gap between something like an LLM and natural intelligence (or AGI). LLMs have just one string to their bow, so they can't tell whether they're "hallucinating", because detecting that requires a different paradigm. The narrow cognitive capabilities of generative AI mean it requires an engaged human operator to use safely.

For many decades now I've heard AI advocates claim that the best way to study natural intelligence is to try to reproduce it. I always thought that was baloney: the best way to study natural intelligence is to examine and experiment with animals that possess it. What AI researchers actually do is write programs that perform tasks which previously could only be done by humans. That's why when any AI technique starts to work, it's not AI anymore. But these tasks are so restricted, and the approach taken is so uninformed by actual psychological research, that I doubt these limited successes tell us anything about natural intelligence.

Until, maybe, now. Now that AI has reached a point of unprecedented impressiveness, I think it teaches us that the reductive approach to AI we've been taking won't generate systems that can be trusted without human oversight. That doesn't mean these systems aren't "AI" by the field's own Turing test.

Comment Re:Yeah, no shit, Sherlock. (Score 3, Insightful) 57

Anyone who has studied the Earth's climate knows we are in the bottom 90% of temperatures over time, and we are exiting an interglacial, and that the earth is unstable at this cold temperature.

A bit of an exaggeration, but let's assume it's exactly right. It's irrelevant. The conditions in the Archaean Eon or the Cretaceous Period tell us nothing about what would be good for *us* and the other species that currently inhabit the Earth. What matters to species living *now*, including us, is what is typical of the Quaternary Period. Those are the conditions we've evolved to survive in.

There isnt meant to be ice on the surface.

Says who? In any case, there have been at least four major ice ages prior to the Quaternary Period (past 2.6 million years). The Quaternary represents less than 1% of the time in which the Earth has had major ice sheets.

Even the rate of change isn't unique in Earth's storied history.

This is just wrong. Since 1990 the rate of change of global average temperature has been around 2 °C per century. So far as we know, it has never before reached even 1 °C per century on a global basis. Of course there have been regional events in the 1 °C-per-century range, like the termination of the Younger Dryas, but that high rate of warming was regional, and those regions experienced catastrophic mass extinctions.
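As a back-of-the-envelope sanity check on that figure (using approximate, illustrative anomaly values in the ballpark of published global-mean records, not an actual dataset):

```python
# Rough warming-rate check. The anomaly values below are approximate
# and illustrative (relative to a mid-20th-century baseline), chosen
# only to show the arithmetic behind a "degrees per century" figure.
anomaly_1990 = 0.45   # deg C, approx. global mean anomaly around 1990
anomaly_2023 = 1.15   # deg C, approx. global mean anomaly around 2023

rate_per_year = (anomaly_2023 - anomaly_1990) / (2023 - 1990)
rate_per_century = rate_per_year * 100

print(f"~{rate_per_century:.1f} C per century")
```

With numbers anywhere near these, the rate comes out on the order of 2 °C per century, which is the scale of the claim above.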

There is no "right" temperature for the Earth. There isn't even a "normal" one. Humans, if we continue to exist as a species for another million years, will experience states of the Earth that will look very alien to us, and they'll like it that way, because that's what they'll be used to. The problem for us right now is that the rate of change is beyond what we can absorb as mere economic *stress*, and well beyond the levels that triggered mass extinctions in the geologic past.
