Comment Re: AIs do not think. (Score 1) 121

The machine cannot understand.
But the computational process running on the machine (which considers some information, then decides based on that information which information to consider next) can.

At a sufficient level of abstraction, it is the same with us.

What are we if not biological machines capable of complex information storage and processing?

Comment Re: AIs do not think. (Score 1) 121

You are wrong about this. It turns out that associatively representing the statistics of syntactic relationships in large corpora of written human expression essentially captures the semantics of the concepts and conceptual relationships of most interest and concern to humans across our collective history of learning and thinking about the world.

Thus the AIs are able to traverse a model which is essentially a semantic network or knowledge base about the world and its entities, relationships, processes, and situations, as perceived and prioritized by humans.

They have what they need to start thinking.
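
To make the "statistics start to capture semantics" point concrete, here is a toy sketch of my own (made-up corpus and window size, nothing like a real training pipeline): even raw co-occurrence counts group words by how they are used.

    import numpy as np

    # Toy corpus; a +/-2 word co-occurrence window (both are illustrative choices).
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog").split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    co = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(corpus):
        for j in range(max(0, i - 2), min(len(corpus), i + 3)):
            if j != i:
                co[idx[w], idx[corpus[j]]] += 1

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Words that appear in similar contexts end up with similar vectors:
    print(cosine(co[idx["cat"]], co[idx["dog"]]))   # high
    print(cosine(co[idx["cat"]], co[idx["on"]]))    # lower

Real LLMs learn much richer, contextual representations than a count matrix, but the bet is the same: usage statistics carry meaning.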

Comment Requirements and design skills? (Score 1) 121

The question that should concern us in the software dev field is how well the best AIs are progressing at
1. eliciting requirements from humans who have various degrees of vague understanding of what they want and even less understanding of what they really need, and
2. translating those requirements into a system design with good characteristics (performant, scalable, and future-proof architecture and component/tool/framework choices; a fit-for-purpose, maintainable, adaptable design; etc.)

Comment Inductive fallacy i.e. boiling frogs (Score 1) 121

You know the story of the frogs (which are cold-blooded) sitting in a pot of water and thinking everything is fine as the temperature rises and rises, until one minute the water boils. It is an analogy about the dangers of unanticipated phase changes.

Just because you've been able to say "see, it's not different this time" several times in the past does not mean that I will be wrong when I say "It's different this time!"

The difference this time is that the cognitive capability, and the ready-to-hand, exploitable information access, of AIs are rather rapidly approaching, and in some areas have already surpassed, an increasing chunk of the bell curve of human capabilities.

Comment WTF does AGI have to do with generating profits? (Score 1) 61

AGI means Artificial General Intelligence.

-Something that can learn and build compressed models and reason in arbitrary new domains.
-Something that can use analogy / isomorphism to extend knowledge gained in one domain or situation or task to another.
-Something that can build up, over time, both specific episodic memories and generalized models (including situation models and mathematics) across multiple domains, and can build an associative memory covering both specific domains, with their constituents, rules, and evolution, and the important relationships between domains, to assist in extending reasoning across domains.
-Something that can construct simulations of domains and of the situations/problems within them, and reason over those simulations, including hypotheticals and plan formation, testing, and selection.
-Something that can use associative search, reasoning, simulated scenario building and exploration (imagination) to be creative.
-Something that can have and execute general goals.
-Something that can generate and refine and evolve/adapt its own goals.
-Something that can model other agents in the world and their minds/goals/needs, and can thus conceive of, form, and execute specific co-operative action plans; that can understand the societal structures (memes) of social co-operation and the efficiencies and effectiveness gained from them; and that can incorporate these into action plans, executions, and evaluations.
-Something that can introspect about all of this, which may help it improve and focus its goals, strategies, learning methods, and other reasoning and action techniques.

That's all. Easy peasy.
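
If you want the list above as a single (hopelessly oversimplified) control loop, here is a skeletal sketch of my own; every method body is a stub standing in for a hard open research problem, and none of this is anyone's actual architecture.

    class ToyAgent:
        def __init__(self):
            self.episodic = []             # specific remembered episodes
            self.models = {}               # generalized / compressed domain models
            self.goals = ["initial goal"]  # goals get generated and refined later

        def perceive(self, observation, domain):
            self.episodic.append(observation)      # episodic memory
            self.models[domain] = "updated model"  # compressed generalized model

        def simulate(self, plan):
            return len(plan)               # stub "expected value" from imagined rollouts

        def plan(self, goal):
            candidates = [f"plan-{i} for {goal}" for i in range(3)]
            return max(candidates, key=self.simulate)   # keep the best imagined plan

        def introspect(self):
            if len(self.episodic) > 10:    # refine goals in light of experience
                self.goals.append("consolidate and improve learning methods")

        def step(self, observation, domain):
            self.perceive(observation, domain)
            action = self.plan(self.goals[-1])
            self.introspect()
            return action

    agent = ToyAgent()
    print(agent.step({"weather": "rain"}, domain="outdoors"))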

Comment Re:Yep (Score 1) 86

Doing both (long low-impedance wire and more storage) would have made a lot of sense. Storage is still ridiculously expensive, and with both, you have redundancy.

The general principle of time-shifting solar PV across time zones (and geographically moving wind power output across weather systems) makes so much sense that humans will probably fail to get it done because we seem to be erring on the side of parochialism, xenophobia, and collective stupidity these days.
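
A back-of-envelope illustration of the time-shifting point (all numbers and the six-hour offset are my own toy assumptions): two PV fleets a few time zones apart cover more hours of the day together than either does alone, so less storage has to bridge the gap.

    import math

    def pv_profile(tz_offset_hours):
        # Crude clear-sky shape: output between 06:00 and 18:00 local solar time.
        out = []
        for utc_hour in range(24):
            local = (utc_hour + tz_offset_hours) % 24
            out.append(max(0.0, math.sin(math.pi * (local - 6) / 12)))
        return out

    site_a = pv_profile(0)    # reference site
    site_b = pv_profile(6)    # a site six time zones to the east (illustrative)
    combined = [(a + b) / 2 for a, b in zip(site_a, site_b)]

    print(sum(1 for x in site_a if x > 0.1), "productive hours at one site")     # 11
    print(sum(1 for x in combined if x > 0.1), "productive hours for the pair")  # 17

The same averaging argument applies to wind output spread across separate weather systems.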

My cynicism is born of long experience with the highly sub-optimal and environmentally destructive decisions we have been collectively making over the last four decades.

Comment Re: The cost of AI (Score 1) 361

Each important new technology tends to get commoditized: its cost per unit of functionality decreases with both scale and innovation, including supply-chain innovation.
See "Simplicable Technology Commoditization Curve".
See "Wright's Law".

One random, tiny, but concrete and meaningful example: in AI there is already a technique whereby large models that are energy-expensive to query can be replaced by much smaller distilled models. See what DeepSeek did based on the expensive American LLM models.
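
The standard recipe behind such smaller models is knowledge distillation: train a small student to match the softened output distribution of a large teacher. A generic sketch of the usual loss (not any particular lab's training code), using PyTorch:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions, then pull the student toward the teacher
        # with a KL divergence; temperature**2 keeps the gradient scale comparable.
        t = temperature
        soft_targets = F.softmax(teacher_logits / t, dim=-1)
        log_student = F.log_softmax(student_logits / t, dim=-1)
        return F.kl_div(log_student, soft_targets, reduction="batchmean") * (t * t)

A well-trained student often keeps much of the useful behaviour at a fraction of the query cost, which is exactly the cost-per-functionality decline described above.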
