Comment Re: Robotics (Score 1) 238

You seem to be thinking about what kind of problems a system can solve from the perspective of the theory of computation. This is all fine, obviously. But if you then equate problem solving with intelligence, you are begging the question of whether human intelligence is computational.

To be more direct: you're confusing the computational definition of "problem solving" (Turing completeness) with the practical definition of "problem solving" (intelligence).

I'm trying to argue that computational-problem-solving and practical-problem-solving do overlap, but are nevertheless distinct enough that failing to achieve Church-Turing levels of computational-problem-solving does not imply failing to achieve human levels of practical-problem-solving.

Searle's Chinese Room argument is famously about understanding language. I don't see why that's relevant in our discussion, but you don't seem to know that either so I think we'd better let that rest.

Comment Re: Robotics (Score 1) 238

In a previous post, you said:

"As it happens, the philosophers did provide us with a really neat argument against computationalism that comically upsets people in a way that feels very much like a bishop lashing out at a blasphemer."

Could you please elaborate on this? I feel our disagreement revolves around whether or not human intelligence is based on computation in the mathematical sense.

Comment Re: Robotics (Score 1) 238

> If you don't care, then why did I waste all this time?

Obviously I can't speak to the why, but I'd like you to know that I am grateful for your time and insight and have very much enjoyed our discussion.

> This started, if you remember, when you asked me why I believe that LLMs are a 'dead end' as far as AGI is concerned.

Yes, and then we explored neural networks with respect to the theoretical foundations of computation.

And then I made a comparison to human brains, to argue that Turing completeness can't be necessary for AGI. Which means that whether or not LLMs are Turing complete is immaterial to whether or not LLMs are a dead end for AGI.

> Anyhow, AGI is science fiction

Everything is, until it happens.

Comment Re: Robotics (Score 1) 238

> scaling the machine vs. scaling resources

I agree with your intuition. Using arbitrary precision reals feels like a hack, compared to increasing the functionality of the machine by adding a stack or a tape.

> in the end it's still just generating text probabilistically, one token at a time

But I think when it comes to AI, our intuition on how a system should be designed might be misleading. I mean, I *like* how simple a neural network is. The simplicity feels *right*, when we're trying to build artificial brains. After all, our brains are just interconnected cells exchanging electro-chemical signals. Even just trying to see what works, mixing in specialized models, without any formal foundation seems very... natural. So maybe what our brains are doing isn't computation in the mathematical sense either? Maybe we're just generating text (or imagined future world-states) probabilistically? So what?
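To make "generating text probabilistically, one token at a time" concrete, here is a minimal sketch of autoregressive sampling. The bigram table is made up for illustration; a real LLM conditions on the whole preceding context and learns its probabilities from data.

```python
import random

# Toy next-token distributions (hypothetical values, for illustration only).
bigram_probs = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "</s>": 0.3},
    "dog": {"sat": 0.7, "</s>": 0.3},
    "sat": {"</s>": 1.0},
}

def sample_next(token):
    """Draw the next token from the conditional distribution."""
    dist = bigram_probs[token]
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(max_len=10):
    """Generate a sentence one token at a time until end-of-sequence."""
    tokens, current = [], "<s>"
    for _ in range(max_len):
        current = sample_next(current)
        if current == "</s>":
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate())
```

The whole "model" is just repeated sampling from conditional distributions, which is the point: simple mechanism, and the interesting question is what behaviour falls out at scale.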

Comment Re: Robotics (Score 1) 238

I don't really see why you feel so strongly about (the impracticality of?) arbitrary precision reals, but not about the (equally impractical?) unbounded memory of Turing machines. You're obviously more knowledgeable than I am in this area, but don't they serve essentially the same purpose: storing state?
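The "same purpose: storing state" point can be made concrete: an unbounded stack of bits can be packed into the binary expansion of a single rational number, in the spirit of the real-weight simulation proofs. This sketch is my own illustration, not taken from any particular paper.

```python
from fractions import Fraction

# Encode a stack of bits as a rational in [0, 1): the most recently
# pushed bit is the leading bit of the binary expansion.

def push(x, bit):
    """Prepend a bit to the expansion: x' = (bit + x) / 2."""
    return (Fraction(bit) + x) / 2

def pop(x):
    """Read and remove the leading bit of the expansion."""
    bit = int(2 * x >= 1)
    return bit, 2 * x - bit

stack = Fraction(0)
for b in [1, 0, 1, 1]:
    stack = push(stack, b)

out = []
for _ in range(4):
    bit, stack = pop(stack)
    out.append(bit)

print(out)  # bits come back in LIFO order: [1, 1, 0, 1]
```

One number of unbounded precision and one tape of unbounded length are interchangeable here, which is why both idealizations buy the same thing: unbounded state.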

Anyway, I asked ChatGPT-4 to balance a string of 40 parentheses, which I can't show you because of /.'s junk filter, but it failed ;-) It *did* balance a string of 8 parentheses correctly, though. I don't have time to find the exact length at which ChatGPT starts to break down, but it's interesting that it seems able to do it for small strings even though it was probably not trained on this task specifically.
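For reference, this is the standard counter algorithm for the task (not something from this thread): the counter is unbounded, which is exactly what places balanced parentheses beyond any fixed finite-state machine.

```python
def balanced(s):
    """Check that parentheses are balanced: '(' increments a depth
    counter, ')' decrements it; the counter must never go negative
    and must end at zero."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced("(()())"))             # True
print(balanced("(()"))                # False
print(balanced("(" * 20 + ")" * 20))  # True, depth reaches 20
```

A fixed-size model can memorize this up to some bounded depth, which would be consistent with an LLM handling short strings but failing at length 40.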

Furthermore, I'm not so sure modern LLMs really do have exactly one loop. Don't modern LLM architectures consist of multiple models: a generator, a supervisor, etc.?

Comment Re: Robotics (Score 1) 238

Hehe. :-) Thanks, that's interesting. I can see why that would support your argument that NNs are fundamentally limited, at least in practice. But (and I know this is not the argument I was making originally) we should remember we're dealing with highly abstract concepts: Turing Machines and idealized neural networks. I mean, aren't you saying something like "This Von Neumann architecture is all well and good, but I haven't seen a Turing complete implementation yet!"?

Comment Re:Robotics (Score 1) 238

Thanks for your thoughtful and elaborate reply. I did study computer science in university in the early 2000s and I'm familiar with the Chomsky hierarchy of formal languages. I have not kept up with the scientific literature, but recent articles about LLMs in the popular press suggest that LLMs are encoding the *rules* of arithmetic, because for a complex (multi-layered, recurrent) network this is more efficient than storing the actual input/output pairs. This suggests to me that something else is going on ("emergence" is a big word... but still).

Nevertheless, the fundamental properties of neural networks are still fundamental, right?

Or maybe not.

May I recommend this article to you?

"On the Turing Completeness of Modern Neural Network Architectures"

Jorge Pérez, Javier Marinković, Pablo Barceló

Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of two of the most paradigmatic architectures exemplifying these mechanisms: the Transformer (Vaswani et al., 2017) and the Neural GPU (Kaiser & Sutskever, 2016). We show both models to be Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. In particular, neither the Transformer nor the Neural GPU requires access to an external memory to become Turing complete. Our study also reveals some minimal sets of elements needed to obtain these completeness results.

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Farxiv.org%2Fabs%2F1901.034...

Comment Re:Collectivism wasn't the problem (Score 2, Interesting) 238

I agree with the conclusion that the transition to a Star Trek style socialist utopia will be slow, boring and peaceful. I believe it is inevitable. The larger historical trend (200k years) is evident. But we won't be happy, since human beings define their reality through misery and suffering. In utopia, we will suffer as much from the anxiety that our food is slightly less perfectly prepared than our neighbour's as medieval humans suffered from famine.

If you want a picture of the future, imagine a perfect world, with people complaining about everything, forever.

Comment Re:Robotics (Score 2, Insightful) 238

Yeah go ahead buddy, "concentrate on real AGI". Concentrate real hard. You're an idiot and you don't know what you're talking about. At the very beginning of computing, Alan Turing made the following argument:
1) we don't have a rigorous definition of "general intelligence",
2) we *do* know that our goal is to build a human-like intelligence (AGI), and
3) even without a rigorous definition, we *can* recognise human-like intelligence in other people, by interacting with them.
From this follows Turing's pragmatic approach: once we can't determine with better-than-chance probability whether an agent we're interacting with is a human or a computer system, we must concede the computer system has human-like intelligence. This is the Turing test. And guess what? Both Google's LaMDA and GPT passed this test in 2022. This is not a fad. This is our first real shot at AGI. Putin is right to be worried.

Comment Introduction of regulation also (Score 1) 35

The summary describes the goal of this lobby group as "reining in lawmakers and regulators". It is important to remember that Facebook, and companies like it, also lobby to *introduce* new legislation. Often under the guise of 'creating a level playing field', new regulation serves to raise the barrier to entry for competitors. Facebook has the cash and organizational capabilities to hire lawyers and implement corporate compliance processes. Your average guys-in-a-garage startup does not.

Comment washed up (Score 1) 211

They're not too big. They're washed up and void of innovation. Amazon keeps coming up with crazy stuff, changing direction every few years to invent the next big thing. They started by selling books. Then they became the largest online retailer, and now they sell infrastructure/cloud space?
