Comment Re: Would anyone have noticed? (Score 0) 57

I own a tiny indie studio in Chicagoland and my peers own some of the huge studios in Chicagoland.

Cinespace is dead right now. It has ONE show active. The other studios are so dead that they're secretly hosting bar mitzvahs and pickleball tournaments for $1500 a day just to pay property taxes.

My studio is surprisingly busy, but I'm cheap and cater to non-union folks with otherwise full-time jobs.

Comment Re:Some background would be helpful (Score 1) 33

Well, under some conditions a unique movie car *would* be copyrightable. The case where the car is effectively a character is just one of the ways you can argue a car to be copyrightable.

Copyright is supposed to protect original creative expression, not ideas or functional items; those may be protected by *other* forms of intellectual property like trademarks or patents. A car is a functional item, so *normally* it isn't protected. But to the degree a car in your movie has unique expressive elements that are distinct from its function, those elements can be copyrighted.

But the plaintiff still wanted to claim that he owned the design of the car, so his lawyer looked for a precedent establishing that cars can sometimes be copyrighted even though they are functional items, and he found the Batmobile case, where the Batmobile was ruled to be a prop that was *also* a character. Because he cited this case, the judge had to rule on whether the Batmobile decision's reasoning applied to this car, and he decided it didn't. The car may be unique and iconic, but that's not enough to make it a character.

Comment Re:If AI were an employee (Score 1) 23

Sadly, based on experience, I think you are wrong. Employees who screw up are often not fired, or are replaced with employees just as bad.

There's a reason the saying "You pay peanuts, you get monkeys" is so common: employers very often accept mediocre or even poor work if the employees doing it are cheap enough. I'm not anti-AI -- not even anti generative AI. I think with AI's ability to process and access huge volumes of data, it has tremendous potential in the right hands. But generative AI in particular has an irresistible appeal to a managerial culture that prefers mediocrity when it's cheap enough.

Instead of hiring someone with expensive thinking skills to use AI tools effectively and safely, you can just have your team of monkeys run an AI chatbot. Or you can fire the whole team and be the monkey yourself. The salary savings are concrete and immediate; the quality risks and costs seem more abstract because they haven't happened yet. Now, as a manager, it's your job to guide the company to a successful future, but remember, you're probably mediocre at your job. Most people are.

According to economics, employers stop adding employees when the marginal productivity of the next employee drops below what it costs to employ them. What this means is that AI *should* create enormous demand for people with advanced intellectual skills. But it won't, because managers don't behave the way they do in neat abstract economic models. What it will do is eliminate a lot of jobs where management neither desires nor rewards performance, because they don't want anything a human mind can, at this point, uniquely provide.
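
For reference, the textbook version of that condition (standard microeconomics, not anything from TFA) is that a profit-maximizing firm keeps hiring until the marginal revenue product of labor falls to the wage:

\[ MRP_L \;=\; p \cdot \frac{\partial Q}{\partial L} \;=\; w \]

where $p$ is the output price, $Q$ is output as a function of labor $L$, and $w$ is the wage. On paper, a tool that raises skilled workers' marginal product should raise demand for them; the argument above is that real managers often don't optimize this way.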

Comment Re:Duh! (Score 1) 68

I think we should make a distinction between "AI" and "AGI" here. Human intelligence consists of a number of disparate faculties -- spatial reasoning, sensory perception, social perception, analogical reasoning, metacognition, etc. -- which are orchestrated by consciousness and executive function.

Natural intelligence is like a massive toolbox of cognitive capabilities useful for survival that evolution has assembled over the six hundred million years since neurons evolved. The upshot is that you can reason your way to the faulty conclusion that you can remove the blue block from underneath the red block without disturbing the red block, but then metacognition, using *other* mental faculties, will overrule that conclusion. Right there you have one big gap between something like an LLM and natural intelligence (or AGI). LLMs have just one string to their bow, so they can't tell whether they're "hallucinating", because detecting that requires a different paradigm. The narrow cognitive capabilities of generative AI mean it requires an engaged human operator to use safely.

For many decades now I've heard AI advocates claim that the best way to study natural intelligence is to try to reproduce it. I always thought that was baloney: the best way to study natural intelligence is to examine and experiment with animals that possess it. What AI researchers actually do is write programs that perform tasks which previously could only be done by humans. That's why when any AI technique starts to work, it's not AI anymore. But these tasks are so restricted, and the approach taken is so uninformed by actual psychological research, that I doubt these limited successes tell us anything about natural intelligence.

Until, maybe, now. Now that AI has reached a point of unprecedented impressiveness, I think it teaches us that the reductive approach we've been taking won't generate systems that can be trusted without human oversight. That doesn't mean these systems aren't "AI" by the field's own Turing test.

Comment Re:Yeah, no shit, Sherlock. (Score 3, Insightful) 57

Anyone who has studied the Earth's climate knows we are in the bottom 90% of temperatures over time, and we are exiting an interglacial, and that the earth is unstable at this cold temperature.

A bit of an exaggeration, but let's assume it's exactly right. It's irrelevant. The conditions in the Archean Eon or the Cretaceous Period tell us nothing about what would be good for *us* and the other species that currently inhabit the Earth. What matters to species living *now*, including us, is what is typical of the Quaternary Period. Those are the conditions we've evolved to survive in.

There isnt meant to be ice on the surface.

Says who? In any case, there have been at least four major ice ages prior to the Quaternary Period (past 2.6 million years). The Quaternary represents less than 1% of the time in which the Earth has had major ice sheets.

Even the rate of change isn't unique in Earth's storied history.

This is just wrong. Since 1990 the rate of change of global average temperature has been around 2 C / century. So far as we know, it has never reached even 1 C / century before on a global basis. Of course there have been regional events in the 1 C / century range, like the termination of the Younger Dryas, but that high rate of warming was regional, and those regions experienced catastrophic mass extinctions.
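
The arithmetic behind that figure, for anyone who wants to check it (the ~0.7 C is the commonly cited post-1990 warming, not a number from TFA):

\[ \frac{\approx 0.7\ \mathrm{C}}{\approx 35\ \mathrm{yr}} \times 100\ \mathrm{yr} \;\approx\; 2\ \mathrm{C\ per\ century} \]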

There is no "right" temperature for the Earth. There isn't even a "normal" one. Humans, if we continue to exist as a species for another million years, will experience states of the Earth that would look very alien to us, and they'll like it that way, because that's what they'll be used to. The problem for us right now is that the rate of change is fast enough to cause serious economic *stress*, and well beyond the rates that triggered mass extinctions in the geologic past.

Comment Re:Maybe biologists and doctors should consider (Score 2) 49

You are proposing that scientists terraform the Earth -- something we're centuries, if not millennia, away from knowing how to do.

Take a single cubic meter of dirt from your back yard. That is practically a world in itself, far beyond the capabilities of current science to understand, because there are millions of organisms and thousands of species interacting there, with billions of chemical interactions per second. Of the microbes, only about 1% of the species can be cultured and studied in a lab; the rest are referred to as "microbial dark matter" -- stuff we can infer is there but have no ability to study directly. If you gave scientists a cubic meter of ground-up mineral matter that was completely inorganic, they would be unable to "terraform" it into soil like you get from your yard -- not without using ready-made soil like a sourdough starter.

Terraforming as a sci-fi trope is 1940s and 50s authors imagining the now-obsolete land management practices of their time -- "reclaiming" (filling) wetlands, introducing "desirable" species, re-engineering watersheds like the ones feeding the Aral Sea -- scaled up to a planetary level. The truth is we can't even terraform a bucket of dirt yet; an entire planet is as far beyond our current scientific capabilities as faster-than-light travel.

In any case, the "beneficial" microbes you're talking about are already out there. The problem is that conditions are changing to allow "detrimental" microbes to outcompete them. And there's a 99% chance the microbes in question are microbial dark matter that we can't effectively study. Maybe we need a moonshot program to understand microbial dark matter. Chances are such a program would pay for itself in economic spinoffs. But I don't see any major new scientific initiatives happening in the current political climate.

Comment Re: Can anyone say LLMs? (Score 1) 85

I think this is true only if you are comparing the LCOE of natural gas to solar *with storage* in the US. A plain solar farm without storage is going to be cheaper. We really should look at both, with and without storage, because they're valid comparisons for different purposes, although PV plus storage is probably the best for an apples-to-apples comparison.
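
For anyone unfamiliar with the acronym, LCOE (levelized cost of energy) is just discounted lifetime cost divided by discounted lifetime generation -- this is the standard definition, not anything specific to TFA:

\[ \mathrm{LCOE} \;=\; \frac{\sum_{t=0}^{T} \dfrac{I_t + M_t + F_t}{(1+r)^t}}{\sum_{t=0}^{T} \dfrac{E_t}{(1+r)^t}} \]

where $I_t$, $M_t$, and $F_t$ are investment, maintenance, and fuel costs in year $t$, $E_t$ is energy delivered, and $r$ is the discount rate. Adding storage raises the numerator without adding generation, which is why the with-storage and without-storage comparisons come out so differently.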

The cost of solar has come down year after year for the last thirty years, to the point that *internationally*, at least, the LCOE for solar PV plus storage is now just a little bit less than the LCOE for natural gas, and it is expected to become *dramatically* cheaper by the end of this decade, according to the IEA. Even if they are calculating somewhat optimistically, if solar costs continue to drop it's only a matter of time before solar PV plus storage becomes cheaper than natural gas, even in the US with its cheaper gas.

The wild card here is political action taken to change the direction this situation is heading. Internationally, low PV prices are driven by cheap Chinese suppliers, and it doesn't seem likely we'll see large-scale US domestic solar production in the next five years. In the meantime we have an unstable situation with respect to tariffs on PV components and on lithium for batteries. Until sufficient domestic sources of lithium come online, uncertainty about tariffs will create financial problems for US manufacturers and projects.

Comment Welcome to the 21st Century. (Score 4, Informative) 17

The molecular basis for epigenetics was discovered in the 1980s, and for the past thirty years or so non-genome-based inheritance has been a pretty hot scientific topic.

This only seems surprising because for most of us, biology education ends with 1953, when the structure of DNA was discovered. We didn't learn about epigenetics (1980s), retroviruses (1970s), or horizontal gene transfer (discovered in the 1920s, but its importance was only realized in the 1990s). The biological world is full of weird, mind-blowing stuff most people have never heard of.

Comment Re: Chances are (Score 1) 90

The ethics module is largely missing in humans too.

Philosophical ethics and ethical behavior are only loosely related -- rather like narrative memory and procedural memory, they're two different things. People don't ponder what their subscribed philosophy says is right before they act; they do what feels ethically comfortable to them. In my experience, ethical principles come into play *after* the fact, to rationalize a choice made without a priori reference to those principles.

Comment Re:Seriously? (Score 4, Interesting) 69

Nobody paid for it. At least, nobody was charged directly. It's customary to cite the grants funding research in any resulting papers, and in the case of *federal* grants it's *mandatory*. The authors simply thank the Cornell Center for Materials Research for use of their rheometer and SEM. The equipment in the CCMR was purchased with NSF money, so I suppose public money paid for whatever wear and tear was involved in taking some rheometer measurements and SEM images.

If you look at the paper, it's not *really* an investigation into cutting onions. To do that you could just line people up to cut onions and have them report on the experience. It's really more about how to use experimental fluid dynamics to investigate a problem. Scientists noodle around with such toy problems all the time. I had a professor back in the 80s who worked on the equation of motion of a spinning coin on a tabletop. Nobody paid him to do that, unless you count his MIT salary. The solution was eventually found by a Cambridge researcher and published in a letter in Nature in 2000 -- again, apparently unfunded research. And as trivial (practically, not mathematically) as the spinning coin problem appears to be, that paper has since been cited by a fair number of physics research papers, so *practically trivial* isn't the same as scientifically pointless.

So you can unclutch your pearls now. The scientists didn't pick your pocket to do a stupid experiment.

Comment Re:Chances are (Score 2) 90

No, it is a useful observation because it gives us something to look into. Just because you don't know how to negate a proposition off the top of your head doesn't mean it can't be done.

It seems quite plausible that if an LLM generates a response of a certain type, it's because it has seen that response in its training data. Otherwise you're positing a hypothetical emergent behavior, which of course is possible, but if anything that's a much harder proposition to negate, if it's negatable at all with any certainty.
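
One crude way to start "looking into" it, purely as a sketch (the n-gram length and the toy data below are made up for illustration; a real test would need the actual training corpus): check how much of a model's output appears verbatim in the training text.

def ngrams(tokens, n):
    """Yield successive word n-grams (as tuples) from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def overlap_score(training_text, response, n=8):
    """Fraction of the response's n-grams found verbatim in the training text."""
    train = set(ngrams(training_text.lower().split(), n))
    resp = list(ngrams(response.lower().split(), n))
    if not resp:
        return 0.0
    return sum(1 for g in resp if g in train) / len(resp)

if __name__ == "__main__":
    # Toy demo with inline stand-in data, not a real corpus.
    corpus = "the quick brown fox jumps over the lazy dog " * 3
    response = "the quick brown fox jumps over the lazy dog again"
    print(overlap_score(corpus, response, n=4))  # high score -> likely memorized

A high score suggests memorization; a low score doesn't prove emergence, it just fails to prove memorization. Either way, it's the kind of falsifiable check the grandparent seems to assume is impossible.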
