
Comment Re: Definition of car vs SUV (Score 1) 254

It's not an SUV unless it can comfortably go up an old logging road with exposed rocks embedded in the road and many potholes, on a mountain.

A Pathfinder was an SUV, as was a 4Runner or a Range Rover.

These new "SUVs" (grocery getters, kid soccer transporters) are a marketing joke that we all seem to have accepted. Those are cars. with a slightly more upright seating position.

Comment Yes, AI is thinking (Score 1) 289

The transformers used by LLMs discover and encode the relationships between words IN HUMAN THOUGHT EXPRESSIONS. The multi-layer nature of the deep neural net into which the transformers encode these relationships means that different layers encode increasing levels of abstraction, along with the ways in which, and how strongly, each more abstract relationship-aspect is involved in each more particular relationship.

By focusing on the contextual relationships that humans attend to in their communications, and by being able to capture abstract aspects of those relationships, the LLM is effectively learning a semantic network of the concepts that words and word sequences uttered by humans represent. The hierarchically organized and carefully represented statistical properties of human syntax (averaged over many, many utterances so as to wash out accidental, non-essential differences in expression) ultimately result in a usable (cheaply, associatively tourable) representation of the semantics (MEANING) behind human communications, i.e. a general and specific knowledge base has been represented in the deep neural net.

It is then possible to design and implement various query-driven and goal-directed associative touring algorithms that visit the concepts in that knowledge base in appropriate touring orders (nearby concepts are nearby in the neural net representation, more general ones are "above", more specific ones are "below"), so that we could fairly say these algorithms are thinking about the queries and about appropriate answers to them.
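To make the "touring" idea concrete, here is a minimal Python sketch. It is purely illustrative (toy, made-up vectors; nothing resembling real LLM internals): concepts are points in a vector space, and an associative tour simply keeps hopping to the nearest concept it hasn't visited yet.

    # Toy sketch: concepts as vectors, "toured" by nearest-neighbour hops.
    # The vectors are invented for illustration, not learned by any model.
    import numpy as np

    concepts = {
        "dog":    np.array([0.9, 0.1, 0.0]),
        "cat":    np.array([0.8, 0.2, 0.1]),
        "animal": np.array([0.7, 0.4, 0.2]),   # more abstract, "above" dog/cat
        "car":    np.array([0.1, 0.9, 0.3]),
    }

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest(query_vec, exclude):
        # Most cosine-similar concept that has not been visited yet.
        return max((c for c in concepts if c not in exclude),
                   key=lambda c: cos(concepts[c], query_vec))

    tour = ["dog"]
    while len(tour) < len(concepts):
        tour.append(nearest(concepts[tour[-1]], exclude=set(tour)))
    print(tour)   # ['dog', 'cat', 'animal', 'car']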

Comment To quote a more humble version of Descartes (Score 1) 186

I think I think therefore I think I am.

But seriously, as some have said here, there appears to be no possible scientific test of whether subjective consciousness is present or being faked.

Therefore one is led to question whether that distinction is meaningful, or matters at all.

This remains the hard problem of consciousness.

My fascination with AI is in developing information-processing (and information-storing) technology that helps us answer exactly how far you can go with a created "thinking" mechanism that we cannot demonstrate has subjective consciousness: how far you can nevertheless go in creating general cognition, including introspective cognition, and cognition guided and prioritized by emotion-like symbolic tags that encode and prioritize ego-desired vs. ego-undesired outcomes in experience-encoding world models, and also in hypothetically future-extended world models. Does any of this require subjective consciousness? I would certainly wager not. Does subjective consciousness spontaneously emerge if enough such general, associatively interlinked information storage and processing is occurring (à la Tononi)? Who knows. How would we tell?

Comment Re: LLM can't do what isn't programmed in to it (Score 1) 186

This statement shows a profound misunderstanding of how computational complexity works.

Yes, computer algorithms can output things that are not (intentionally) programmed into them.

There are two or three secret magic things which result in that happening:

The first is (effectively) random input. An algorithm may be attached to an input device (e.g. a sensor, or a text feed aimed at most of humanity's Internet text expressions...) which sends the algorithm an effectively random sequence of input data.

The second is loops, which can be designed to feed back a functional result of one piece of random input as input to the next processing round, so that the first piece of random data helps determine the direction and extent of the algorithmic processing of a subsequent, different random input. I say the feedback loop "helps" determine the further processing because a third magic element, the conditional statement, branches the processing (inside the loop) this way and that depending on this or that small detail of the (effectively random, remember) input, or of the (even more complex and completely unpredictable) combination of past and present inputs.

No programmer could possibly keep up with the complexity of how this looping, conditional execution over random input sequences will go. No programmer can, in general, know where this kind of processing will end up, or with what result; they cannot even predict whether the program will keep looping or stop with a result.

Finite, simple algorithms, particularly with random input sequences (though those are not even strictly necessary), can generate arbitrary, unpredictable results. There is actual, well-known math (the halting problem, for one) behind what I've tried to say in lay terms here.
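For the curious, here is a minimal Python sketch of those ingredients working together; the random bit stream simply stands in for "effectively random input", and the trajectory and stopping point are not something the programmer enumerated in advance.

    # Minimal sketch: random input + a feedback loop + conditional branches.
    # The program is tiny and fully specified, yet its path and stopping
    # time for any particular input stream were never "programmed in".
    import random

    state = 1
    steps = 0
    while state != 0 and steps < 1_000_000:    # loop; state feeds back
        bit = random.getrandbits(1)            # effectively random input
        if bit:                                # conditional branching
            state = state * 3 + 1
        else:
            state = state // 2
        steps += 1

    print(f"stopped after {steps} steps with state {state}")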

Comment Well most people apparently believe in sky fairies (Score 1) 78

(i.e. god)

i.e. people on average aren't empirical or rational at all, but rather, mostly tribal-social.

so it's not surprising that people with ferretman's climate-denial beliefs abound.

People don't actually know HOW to believe properly (i.e. how to use the scientific method to weight and adjust their beliefs), never mind what to believe.

Comment Re:We're so fckd (Score 1) 46

Or perhaps some inexpensive to scale batteries:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fenergydome.com%2Fco2-battery%2F

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fblog.google%2Foutreach-initiatives%2Fsustainability%2Flong-term-energy-storage%2F

Comment You're kind of missing the (inflection) point (Score 1) 73

There will come a time, and it's almost certainly within a generation from now, when AI and automation will be better than let's say 90% of humans at the things that those humans do in the economy. 90% figure pulled out of orifice. Could be 80%, could be 99%. Doesn't qualitatively matter from a socioeconomic problem and policy perspective.

And the AI and automation will also be better than 90% of humans at whatever new "replacement" jobs people or AIs come up with as a spin-off of the new automated economy.

This is qualitatively different than many previous technological revolutions, in that NO faculty/capability of the average human will be more effective for work than the new automation.

Previously, when strong men wielding shovels and scythes were replaced by combine harvesters, those men could go on to another job, like construction, or plumbing, or computer programming, in a new urban economy. Horsemen and stablehands could become truck or taxi drivers and mechanics or oil workers or gas jockeys.

When hand weavers were replaced by looms, they could go on to become administrative functionaries or apparatchiks in some business or bureaucracy.

Soon, almost all of the replacement jobs or new functions needed in a re-jigged automated economy will themselves be more cost-effectively done by more AI and more automation. That is the difference.

If you get that (how it really is different this time), then you may be agile of mind enough to be one of the last to be replaced. If you can't grasp it, well good luck to you my fine fellow human.

Comment Re:Coal vs batteries (Score 2) 46

You're saying getting rid of coal power will make no measurable difference to CO2 at all?

Just so I'm clear on what you're saying.

Lifecycle emissions analysis:
    Solar PV electricity + Li-ion BESS is on the order of 50 grams CO2-eq embedded emissions per kWh of electric energy.
    Coal electricity is on the order of 1,000 grams CO2-eq per kWh.

So coal is 20 times more CO2-emitting per unit of energy generated.
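A quick back-of-envelope check, using only the two ballpark lifecycle figures quoted above:

    # Both figures in grams CO2-eq per kWh, per the rough numbers above.
    solar_plus_bess = 50
    coal = 1000

    print(coal / solar_plus_bess)            # 20.0 -> coal emits ~20x as much per kWh
    print((coal - solar_plus_bess) / 1000)   # 0.95 -> ~0.95 tonnes CO2-eq avoided per MWh switched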

Yet you're claiming that if Australia, and other countries in the world reliant on coal-fired electricity, switched to renewables + energy storage there would be no measurable impact on CO2 emissions to the atmosphere.

Things that make you go hmmmmm.

Comment If only there was a way (Score 4, Insightful) 46

to hold these clowns and their descendants directly responsible for part of the cost of the human economic and health losses due to anthropogenic global heating and the consequent extreme and shifted climates, ecosystem disruptions, and impoverishments.

There should be a reverse trust fund to allocate the blame when leaders act against global-heating mitigation in full knowledge (see the IPCC reports) of the consequences.

Maybe that would stop a few of them from making grievously irresponsible decisions like this one.

Comment Re:We're so fckd (Score 1) 46

There are several ways to make better use of excess peak solar power.

1) You can use 2,000 to 3,000 km HVDC power transmission lines to ship it to where it's needed, e.g. ship the power east or west to different time zones where it's a different time of day, so that mid-afternoon peak solar arrives somewhere that is in its late-afternoon/evening peak-demand hours.

2) You can use li-ion (or soon, sodium-ion) batteries, or vanadium redox flow batteries, or liquid/gaseous CO2 storage, or thermal storage, or even very inefficient hydrogen electrolysis, storage, and fuel cells to store the excess energy until the time of day it's needed. Even if you only store and return 40% to 80% of the excess energy, that's much better than curtailing the solar PV production due to negative prices or grid imbalances (see the rough numbers sketched after this list).

3) You can incentivize, signal, and computer-control (energy-manage) smart loads (such as EV chargers, HVAC/AC units, and industrial loads) to consume more power during peak solar times and less when less solar is available.

Just use all three of these methods together and, bingo bango bongo, the problem is pretty much solved. All tried, tested, and available technologies.
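As a rough illustration of point 2 (the round-trip efficiencies below are illustrative assumptions, not measured values), even quite lossy storage delivers far more energy later than curtailment, which delivers none:

    # Assume 1 MWh of surplus midday solar that would otherwise be curtailed.
    surplus_mwh = 1.0

    for label, round_trip_eff in [("curtailed", 0.0),
                                  ("hydrogen loop", 0.40),
                                  ("CO2 / thermal", 0.60),
                                  ("li-ion BESS", 0.85)]:
        delivered = surplus_mwh * round_trip_eff
        print(f"{label:15s} -> {delivered:.2f} MWh delivered later")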

Comment Re: The days of stupidity in the US are over (Score 1) 224

Oil: good, guns: good! Oily guns: Best! Solar panels: Commie!

You've got to admit, the man is a comedic genius.

He's the no holds barred WWE heavyweight champion of the world in political comedic pantomime theatre.

It will be boring when he's gone... and of course uncomfortably hotter and wildfire smoky or flooded depending on where you live, but hey... that's what Americans voted for right?

Comment Re: AIs do not think. (Score 1) 121

The machine cannot understand.
But the computational process running on the machine (which considers some information, then decides based on that information which information to consider next) can.

It is the same (at sufficient abstraction level for comparison) with us.

What are we if not biological machines capable of complex information storage and processing?

Comment Re: AIs do not think. (Score 1) 121

You are wrong about this. It turns out that associatively representing the statistics of syntactic relationships in large corpora of written human expression essentially captures the semantics of the concepts and conceptual relationships of most interest and concern to humans in our collective history of learning and thinking about the world.

Thus the AIs are able to traverse a model which is essentially a semantic network or knowledge base about the world and its entities, relationships, processes, and situations, as perceived and prioritized by humans.
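As a toy illustration of the distributional idea behind that claim (a made-up four-sentence corpus, nothing resembling real transformer training): words that occur in similar contexts end up with similar co-occurrence vectors, which is a crude first step from syntax statistics toward semantics.

    # Count each word's context words within a small window, then compare
    # the resulting vectors. "cat" and "dog" share contexts; "cat" and
    # "truck" share fewer.
    from collections import Counter
    import math

    corpus = ("the cat chased the mouse . the dog chased the cat . "
              "the car needs fuel . the truck needs fuel .").split()

    def context_vector(word, window=2):
        counts = Counter()
        for i, w in enumerate(corpus):
            if w == word:
                for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                    if j != i:
                        counts[corpus[j]] += 1
        return counts

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in set(a) | set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb)

    print(cosine(context_vector("cat"), context_vector("dog")))    # higher (~0.98)
    print(cosine(context_vector("cat"), context_vector("truck")))  # lower  (~0.74)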

They have what they need to start thinking.
