Comment Re:Zero now for sure (Score 1) 165
Those aren't really reactionless drives. They're pushing on the Earth. The Earth is the reaction mass.
It works fine. Windows 10 does too.
40% of desktop computers sold in 2015 had SSDs, and lots of those got upgraded too. But Windows 11 requires CPUs that didn't exist until 2017 and 2018, and weren't dominant until later still, so we're not really talking about "10 year old hardware."
That's a reasonable summary, although I'm not sure it's exactly what Searle himself meant. The weak point is pretty obvious though:
2. Intelligence (by any reasonable understanding of it) involves understanding the deeper meaning, the context, the nature and qualia of the thing being considered. The semantics as Searle puts it.
"Deeper meaning" is a big red flag. If you start by assuming "deeper meaning" is something mystical, uncomputable, or non-physical then indeed, there is, by definition, no possibility of "connecting it" to anything computable or physical. QED. This approach isn't very interesting though, and hasn't been very fruitful. God directs the planets in the sky as He wills. Go home and pray, there's no more to be done here. Ironically, such assumptions imply that these things are beyond understanding, "deeper" or otherwise.
If, on the other hand, you define intelligence in a usable way, you can start asking interesting questions.
Consider the method that poster mentioned but reverse it. Take an individual neuron and study its response to all possible inputs. This is practical to do for reasonable subsets of "all possible inputs", and is in fact done all the time. There's randomness that makes the exact output unpredictable, but nothing really surprising, nothing we can't pretty easily duplicate in a device or software. Now start connecting more than one of those neurons together. The IO function gets more and more complicated, but all we're doing is adding fairly simple, well-characterized units, either biological ones or synthetic. At some point, with the biological units at least, we all say "oh, now it's 'intelligent.'" That point differs between people, but if it gets big enough all of us agree it's intelligent.
This is perfectly straightforward if you define "intelligent" as an IO transfer function of sufficient complexity. If you do so, then there's a nice proof that an artificial neural network of at least two layers, sufficient number of units, and nonlinearity that can form a spanning set (all the common ones except identity work fine) can learn any transfer function.
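That proof (the universal approximation theorem) is easy to demonstrate numerically. Here's a minimal sketch: one hidden layer of tanh units trained by plain gradient descent learns an arbitrary 1-D transfer function (sin, in this case). The layer sizes, seed, and learning rate are my illustrative choices, not anything from the post.

```python
import numpy as np

# One hidden layer of tanh units + linear output, trained by full-batch
# gradient descent to fit sin(x). Sizes and hyperparameters are
# illustrative choices, not from the original post.
rng = np.random.default_rng(0)

x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                        # the target "transfer function"

H = 30                               # "sufficient number of units"
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.03
for _ in range(8000):
    h = np.tanh(x @ W1 + b1)         # nonlinear hidden layer
    pred = h @ W2 + b2               # linear output layer
    err = pred - y
    # backprop, written out by hand for the two layers
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

h = np.tanh(x @ W1 + b1)
pred = h @ W2 + b2
mse = float(np.mean((pred - y) ** 2))
```

Swap sin for any other bounded target and the same loop works; that's the point of the theorem.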
There's no reason not to adopt this definition, and lots of reasons to do so, except that lots of people don't like it. There's no room left for magic. People had the same problem with Newton's mechanics and the "clockwork universe." People still bring up quantum mechanics to save them from it, but all QM adds, even in the most generous interpretation, is randomness. Which, by the way, ANNs love.
You can also go the other way. We can make devices that replace part of the brain and restore functionality. I don't believe any artificial hippocampi have been implanted in humans yet, but there are a fair number of animal studies, including in monkeys.
Then you can ask real questions. How complicated is the transfer function for various levels of intelligence? How can it most efficiently be learned? What configurations of basis functions are most efficient? One important result that's come out of questions like these, and the reason for all the current AI stuff, useful and stupid, is that computational depth can provide exponential gains in efficiency. Thus "deep learning."
What intelligence is, like consciousness, is poorly defined
It's not really. Both words have good definitions. People who claim there aren't any good definitions just don't like any of them, either because they aren't mysterious enough or because they count things as intelligent/conscious that those people would rather exclude.
People would not generally consider someone working through it as intelligent, which would suggest that no machine could be what people would consider to be intelligent.
You are making an enormous leap there, the same one people assume Searle did (he didn't). You could also work through the input/output function of a single neuron by hand but most people wouldn't consider that intelligent. They also wouldn't consider the single neuron itself intelligent. But put enough of them together, with "enough" being part of that definition problem up above, and they would. Why not the same with your program?
The dissatisfaction you mention is the cognitive dissonance between everything we know about the world and our deeply held belief that we are special and possess souls, free will, magical brains, second substance, whatever.
Yes. Static but not stable is what I intended by "you've got to really, really want it."
You could also come up with more or less plausible stable configurations if the strength of dark energy were related to the total energy of the universe, or to the size of the universe with something less than the constant per unit volume the cosmological constant model assumes.
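That scaling idea is usually written with the equation-of-state parameter w, a standard parametrization rather than the poster's own notation: dark energy with pressure p = wρ dilutes as

```latex
% Dark-energy density vs. scale factor a, for constant w (p = w\rho):
\rho_{\mathrm{DE}}(a) \propto a^{-3(1+w)}
% w = -1: cosmological constant, constant energy per unit volume.
% -1 < w < -1/3: still drives accelerated expansion (which needs
% w < -1/3), but the density thins out as the universe grows,
% roughly the "less than constant per unit volume" case above.
```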
Lord of the Flies then. Or any internet community ever.
When people start calling it a "community" seems to be the first sign of the apocalypse. Right after that comes "for the good of the community" and then it's definitely time to move.
If you can make dark energy however strong you want you can come up with a static configuration. It's an unstable equilibrium though, so you've got to really, really want it.
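For what it's worth, the balance being described is Einstein's static universe, a standard textbook result (not something spelled out in the post): set both derivatives of the scale factor to zero in the Friedmann equations for pressureless matter.

```latex
% Static universe: \dot a = \ddot a = 0 for dust (p = 0).
\frac{\ddot a}{a} = -\frac{4\pi G\rho}{3} + \frac{\Lambda}{3} = 0
  \quad\Rightarrow\quad \Lambda = 4\pi G\rho
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G\rho}{3} - \frac{k}{a^{2}} + \frac{\Lambda}{3} = 0
  \quad\Rightarrow\quad k = +1,\quad a = \Lambda^{-1/2}
```

Nudge a up a little and ρ falls off as a⁻³ while Λ stays put, so the expansion runs away; that's the instability.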
That's why the summary says "the standard model of cosmology." Most of the commenters are too lazy to write all five words though.
Einstein wasn't "Einstein" either. He was a great synthesizer who came along at the right time to put a bunch of existing stuff together in a nice package, explained well. He was not a lone genius who appeared out of nowhere to single-handedly revolutionize physics. That's a myth invented because humans have a bias towards personality cults.
Most of the things pop science associates with Einstein are properly credited to others, some contemporary and some preceding him by many years. Some of them are concepts Einstein didn't even like.
What you've said doesn't make sense.
Expanding space explains the expansion of the universe. Prior to the late 1990s it was sufficient to describe what cosmology had observed, and we expected that the expansion should be slowing down due to gravity. Then better observations showed that it was actually speeding up.
The "expanding balloon" thing is part of the current standard model of cosmology as it was part of its predecessor. Dark energy is someone actively blowing up the balloon.
Constant dark energy isn't "explained" by the models either. These things are parameters of the models and the current one uses the simplest parameters compatible with observations. As you say, as parameters for curve fitting.
Their senior editor was certainly faking it.
And if you did that then Searle's "but you wouldn't consider that intelligent" becomes much weaker.
The Chinese Room argument rests firmly on that phrase: "but you wouldn't consider that intelligent, would you?"
If it's magic then we're SOL. I don't see much point in considering that possibility. It's not interesting.
Searle's Chinese Room is misunderstood, especially today.
The essential feature of the Chinese Room is that the translation book doesn't change. Searle sort of associated that with "programs" and AI, which was fairly reasonable to do in the 1980s when the dominant paradigm for AI involved compiling a massive database of facts and applying the right rules to manipulate those facts. Under such restrictions it's reasonable to assume that "true" intelligence is impossible.
Real computers have access to modifiable memory and input though, and today's dominant AI paradigms emphasize learning, i.e. exactly the capability whose absence the Chinese Room analogy treats as critical.
Searle states right in the abstract to his Chinese Room paper that a machine with equivalent "causal powers" to "brain stuff" could be intelligent. He doesn't seem to have ever said what he thought these causal powers were, or what was special about brain stuff that imbued it with them. Some of his writing suggests he thought it was indeed pretty special, but he doesn't seem to have been a blind member of the "my brain is magic" club either. You can read his philosophy consistently with "causal powers" being equivalent to the capability for adaptation in response to environmental input.
You can measure a programmer's perspective by noting his attitude on the continuing viability of FORTRAN. -- Alan Perlis