Comment Re:Some background would be helpful (Score 1) 33

Well, under some conditions a unique movie car *would* be copyrightable. The case where the car is effectively a character is just one of the ways you can argue that a car is copyrightable.

Copyright is supposed to protect original creative expression, not ideas or functional items; those may be protected by *other* forms of intellectual property like trademarks or patents. A car is a functional item, so *normally* it isn't protected. But to the degree a car in your movie has unique expressive elements that are distinct from its function, those elements can be copyrighted.

But the plaintiff still wanted to claim that he owned the design of the car, so his lawyer looked for a precedent establishing that cars can sometimes be copyrighted even though they are functional items, and he found the Batmobile case, where the Batmobile was ruled to be a prop that was *also* a character. Because he cited this case, the judge had to decide whether the Batmobile ruling's reasoning applied to this car, and he decided it didn't. The car may be unique and iconic, but that's not enough to make it a character.

Comment Re:If AI were an employee (Score 1) 23

Sadly, based on experience I think you are wrong. Employees who screw up are often not fired, or are replaced with employees just as bad.

There's a reason there's a common saying that "You pay peanuts, you get monkeys." It's because it's very common for employers to accept mediocre or even poor work if the employees doing it are cheap enough. I'm not anti AI -- not even generative AI. I think with AI's ability to process and access huge volumes of data, it has tremendous potential in the right hands. But generative AI in particular has an irresistible appeal to a managerial culture that prefers mediocrity when it's cheap enough.

Instead of hiring someone with expensive thinking skills to use AI tools effectively and safely, you can just have your team of monkeys run an AI chat bot. Or you can fire the whole team and be the monkey yourself. The salary savings are concrete and immediate; the quality risks and costs seem more abstract because they haven't happened yet. Now as a manager it's your job to guide the company to a successful future, but remember you're probably mediocre at your job. Most people are.

According to economics, employers keep adding employees until the marginal product of the next employee falls below the cost of employing them. What this means is that AI *should* create an enormous demand for people with advanced intellectual skills. But it won't, because managers don't behave like they do in neat abstract economic models. What it will do is eliminate a lot of jobs where management neither desires nor rewards performance, because they don't want anything a human mind can, at this point, uniquely provide.
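To make that textbook hiring rule concrete, here's a minimal toy sketch in Python. The diminishing-returns production function and every number in it are invented purely for illustration; the only point is the stopping rule -- hire while the next worker adds more value than they cost:

```python
def output(workers: int) -> float:
    """Toy diminishing-returns production: each extra worker adds less output."""
    return 100 * workers ** 0.5

def optimal_headcount(price_per_unit: float, wage: float, max_workers: int = 1000) -> int:
    """Keep hiring while the marginal revenue product of the next worker covers the wage."""
    n = 0
    while n < max_workers:
        marginal_product = output(n + 1) - output(n)
        if marginal_product * price_per_unit < wage:  # next hire costs more than they add
            break
        n += 1
    return n

# Invented numbers. If AI scaled output() up for skilled workers, this rule
# would call for *more* hiring of such people, not less -- the "should" above.
print(optimal_headcount(price_per_unit=50.0, wage=1000.0))  # -> 6
```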

Comment Re:Yes, but no.. (Score 4, Insightful) 115

It's not clear to me that the people making the hiring/firing decisions, and deciding how many programmers can be replaced by AI, know the difference between the copy-paste coders you're talking about and the people who are doing the harder things.

And, given the way they think, and given the fact that all of us are subject to a whole host of cognitive biases, some places at least are likely to want to keep the cheap copy-paste types rather than the more expensive senior programmers.

Short term, things will look good. Quarterly reports will be up. It will take longer for companies to realize that they've made a mistake and everything is going to shit, but because of the emphasis on quarterly returns, and because all of these companies are caught up in the groupthink bandwagon of the AI evangelists, a lot of them as institutions may not be able to properly diagnose why things went to shit. (Even if individuals within the institutions do.)

I'm in science (astronomy) myself, and the push here is not quite as overwhelming as it is in the private sector. Still, I've seen people who should know better say "an AI can just do that more efficiently".

Comment I really miss Netflix DVD (Score 5, Insightful) 99

Somewhere early on in the video rental business back in the 80s, a legal precedent was established that production companies couldn't forbid rental of anything they'd released on video. That carried over to DVD. Eventually we had Netflix DVD, which was superior to video rental stores because of its gigantic selection. Usually (though not always) what you wanted to rent was in stock. Yeah, there was a two-day (or so) delay between deciding to watch something and getting to watch it, which we don't have with streaming. But one subscription got you pretty much everything.

Alas, the open renting thing did not transfer over to streaming, so now you have to subscribe to n different services to be able to get what you want on a whim -- undermining at least part of what streaming promised. And stuff moves between services all the time. This is even before we talk about how crappy the discovery tools within any one streaming service are.

It was a sad day when Netflix DVD closed down.

Comment Re:Duh! (Score 1) 68

I think we should make a distinction between "AI" and "AGI" here. Human intelligence consists of a number of disparate faculties -- spatial reasoning, sensory perception, social perception, analogical reasoning, metacognition etc. -- which are orchestrated by consciousness and executive function.

Natural intelligence is like a massive toolbox of cognitive capabilities useful for survival that evolution has assembled over the six hundred million years since neurons evolved. The upshot is that you can reason your way to the conclusion that you can remove the blue block from underneath the red block without disturbing the red block, but then metacognition, using *other* mental faculties, will overrule this faulty conclusion. Right there you have one big gap between something like an LLM and natural intelligence (or AGI). LLMs have just one string to their bow, so they can't tell whether they're "hallucinating", because detecting that requires a different paradigm. The narrow cognitive capabilities of generative AI mean it requires an engaged human operator to use safely.

For many decades now I've heard AI advocates claim that the best way to study natural intelligence is to try to reproduce it. I always thought that was baloney: the best way to study natural intelligence is to examine and experiment with animals that possess it. What AI researchers actually do is write programs that perform tasks that previously could only be done by humans. That's why when any AI technique starts to work, it's not AI anymore. But these tasks are so restricted, and the approach taken is so uninformed by actual psychological research, that I doubt these limited successes tell us anything about natural intelligence.

Until, maybe, now. Now that AI has reached a point of unprecedented impressiveness, I think it teaches us that the reductive approach to AI we've been taking won't generate systems that can be trusted without human oversight. That doesn't mean these systems aren't "AI" by the field's own Turing test.

Comment Re:Yeah, no shit, Sherlock. (Score 3, Insightful) 57

Anyone who has studied the Earth's climate knows we are in the bottom 90% of temperatures over time, and we are exiting an interglacial, and that the earth is unstable at this cold temperature.

A bit of an exaggeration, but let's assume it's exactly right. It's irrelevant. The conditions in the Archean Eon or the Cretaceous Period tell us nothing about what would be good for *us* and the other species that currently inhabit the Earth. What matters to species living *now*, including us, is what is typical of the Quaternary Period. Those are the conditions we've evolved to survive in.

There isnt meant to be ice on the surface.

Says who? In any case, there have been at least four major ice ages prior to the Quaternary Period (past 2.6 million years). The Quaternary represents less than 1% of the time in which the Earth has had major ice sheets.

Even the rate of change isn't unique in Earth's storied history.

This is just wrong. Since 1990 the rate of change of global average temperature has been around 2 C/century. It's never reached 1 C/century before on a global basis so far as we know. Of course there have been regional events in the 1 C/century range, like the termination of the Younger Dryas, but that high rate of warming was regional, and those regions experienced catastrophic mass extinctions.

There is no "right" temperature for the Earth. There isn't even a "normal" one. Humans, if we continue to exist as a species for another million years, will experience states of the Earth that will look very alien to us, and they'll like it that way because that's what they'll be used to. The problem for us right now is that the rate of change is well past what we would experience as mere economic *stress*, and well beyond the rates that triggered mass extinctions in the geologic past.

Comment Re:Maybe biologists and doctors should consider (Score 2) 49

You are proposing scientists terraform the Earth -- something we're centuries, if not millennia, away from knowing how to do.

Take a single cubic meter of dirt from your back yard. That is practically a world in itself, far beyond the capabilities of current science to understand. That's because there are millions of organisms and thousands of species interacting there, and billions of chemical interactions per second. Of the microbes, only about 1% of the species can be cultured and studied in a lab; the rest are referred to as "microbial dark matter" -- stuff we can infer is there but have no ability to study directly. If you gave scientists a cubic meter of ground-up mineral matter that was completely inorganic, they would be unable to "terraform" it into soil like you get from your yard -- not without using ready-made soil as a sourdough starter.

Terraforming as a sci-fi trope is 1940s and 50s authors imagining the obsolete land management practices of the time -- "reclaiming" (filling) wetlands, introducing "desirable" species, re-engineering watersheds like the ones feeding the Aral Sea -- then scaling them up to planetary scale. The truth is we can't even terraform a bucket of dirt yet; an entire planet is as far beyond our scientific capabilities at present as faster than light travel.

In any case, the "beneficial" microbes you're talking about are already out there. The problem is that conditions are changing to allow "detrimental" microbes to outcompete them. And there's a 99% chance the microbes in question are microbial dark matter that we can't effectively study. Maybe we need a Moon-shot program to understand microbial dark matter. Chances are such a program would pay for itself in economic spinoffs. But I don't see any new major scientific initiatives in the current political climate.

Comment Re: Can anyone say LLMs? (Score 1) 85

I think this is true only if you are comparing the LCOE of natural gas to solar *with storage* in the US. A plain solar farm without storage is going to be cheaper. We really should look at both with and without storage, because they're both valid comparisons for different purposes, although PV + storage is probably the best for an apples-to-apples comparison.

The cost of solar has come down year after year for the last thirty years, to the point that *internationally*, at least, the LCOE for solar PV plus storage is now just a little bit less than the LCOE for natural gas, and is expected to become *dramatically* cheaper by the end of this decade, according to the IEA. Even if they are calculating somewhat optimistically, if solar costs continue to drop it's only a matter of time before solar PV plus storage becomes cheaper than natural gas, even in the US with its cheaper gas.
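For anyone who wants to sanity-check this kind of comparison, LCOE is just lifetime costs divided by lifetime energy output, both discounted to present value. Here's a minimal sketch of that calculation in Python; the plant figures are invented purely for illustration and are not the IEA's numbers:

```python
def lcoe(capex, annual_opex, annual_fuel, annual_mwh, years, discount_rate):
    """Levelized cost of energy: discounted lifetime costs / discounted lifetime output ($/MWh)."""
    costs = capex          # up-front capital, not discounted
    energy = 0.0
    for t in range(1, years + 1):
        d = (1 + discount_rate) ** t
        costs += (annual_opex + annual_fuel) / d
        energy += annual_mwh / d
    return costs / energy

# Hypothetical 100 MW plants -- numbers invented for illustration only.
solar_plus_storage = lcoe(capex=180e6, annual_opex=2.5e6, annual_fuel=0.0,
                          annual_mwh=220_000, years=25, discount_rate=0.07)
natural_gas = lcoe(capex=90e6, annual_opex=4.0e6, annual_fuel=9.0e6,
                   annual_mwh=480_000, years=25, discount_rate=0.07)
print(f"solar+storage: ${solar_plus_storage:.0f}/MWh, gas: ${natural_gas:.0f}/MWh")
```

The ranking turns heavily on the fuel term, which is exactly why cheap US gas shifts the comparison relative to the international picture.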

The wild card here is any political actions taken to change the direction this situation is going. Internationally, low PV prices are driven by cheap Chinese suppliers, and it doesn't seem likely we'll see large scale US domestic solar production in the next five years. In the meantime we have an unstable situation with respect to tariffs on PV components and lithium for batteries. Until sufficient domestic sources of lithium come on line, uncertainty about tariffs will create financial problems for US manufacturers and projects.

Comment Welcome to the 21st Century. (Score 4, Informative) 17

The molecular basis for epigenetics was discovered in the 1980s and for the past thirty years or so non-genome-based inheritance has been a pretty hot scientific topic.

This only seems surprising because for most of us our biology education ends with 1953, when the structure of DNA was discovered. We didn't learn about epigenetics (1980s), or retroviruses (1970s), or horizontal gene transfer (discovered in the 1920s, but its importance was only realized in the 1990s). The biological world is full of weird, mind-blowing stuff most people have never heard of.

Comment Re: Chances are (Score 1) 90

The ethics module is largely missing in humans too.

Philosophical ethics and ethical behavior are only loosely related -- rather like narrative memory and procedural memory, they're two different things. People don't ponder what their subscribed philosophy says is right before they act; they do what feels ethically comfortable to them. In my experience ethical principles come into play *after* the fact, to rationalize a choice made without a priori reference to those principles.
