"The first fix is free." Now that you're hooked, bend it.
"The first fix is free." Now that you're hooked, bend it.
William Shatner is a classically trained Shakespearean actor who appeared in festivals and on Broadway prior to switching from stage to television. His TOS enunciation and emphasis are due mostly to his experience with radio performances (which were verbally over the top) combined with directors on TOS constantly telling him to increase the astonishment. And in reality, it wasn't anywhere near as pervasive or dramatic as the pop-culture version that pokes fun at Kirk.
Nah, I kid. No one ever read.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
This is not surprising. This is some folks who have formalized a hypothesis that is *decades old*, and the reporters? They were born yesterday, apparently.
Yeah, but it's only aware of things like impact.
I'll agree that this is not normally called consciousness, but I believe it's the same effect. If you prefer a different definition, what is it?
There *IS* no commonly agreed upon definition that is testable. The only agreed upon definition(s) are pure handwavium.
I'll use your definition (in context) if it's testable and you explicitly define it.
By my definition every program with a logical branch is minimally conscious. Not very conscious, it must be admitted.
I don't feel that consciousness is an on/off type of property. If it's got a decision branch, it's conscious. If it's got more of them, it's more conscious. Of course, then you need to ask "conscious of what?", and the answer is clearly "conscious of the things it's making decisions based on."
That said, I'm quite willing for other people to argue based on other definitions. (Consciousness doesn't seem to have an agreed upon operational definition.) But you've got to specify what definition you are arguing from. And it's got to be a definition that is explicit and operational. (If you can't run a test on the definition, it's worthless.)
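To make that concrete, here's a minimal Python sketch of what I mean by "a program with a logical branch"; the thermostat function and its threshold are just things I made up for illustration, not anything from the article.

def thermostat(temperature_c: float) -> str:
    # One decision branch: behavior depends on exactly one input, so by
    # the branch-counting definition above this program is "minimally
    # conscious of" that temperature reading, and of nothing else.
    if temperature_c < 18.0:  # hypothetical threshold, purely illustrative
        return "heat on"
    return "heat off"

print(thermostat(15.0))  # "heat on"
print(thermostat(22.0))  # "heat off"

Add more branches over more inputs and, by that same definition, you get something "more conscious", though only of the things it actually branches on.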
But is it true? I haven't used Gnome since Gnome2, and haven't followed its development, so I really don't know. (Considering what they did with Gnome3, I find it believable.)
That's not universally accurate. Some of them were jokes. (Admittedly, at least one of the jokes started handling real money.)
FWIW, I don't know if they counted "in game" currencies as "cryptocurrency", but I don't know why they shouldn't. Buying a "magic sword" isn't much different from buying a link to a picture.
What they're showing is a "logical structure", not a physical one. Not a causal one. And if you think about it, that similarity is probably necessary to produce the results, so it's not happenstance. (Actually, calling it a "logical structure" is misleading, but I haven't been able to think of a better term. "Logic", after all, is from the Greek "Logos", or "word", and originally meant something like "the set of rules for producing sentences that made sense in good Greek".)
One problem with that is that we don't know the details of the biochemical reactions. They're too complex to follow. But we can observe the higher-level results, like language. So we can model the higher-level results, but not the causal chain that produced them.
No. That's a bad mistranslation.
What's more accurate is that everything that models any complex object will turn out to have similarities. The more complex the object being modeled, the more detailed the similarities will necessarily be.
If you think about it, this shouldn't be surprising. It's probably inherent in the term "model".
I think you're wrong. By default ALL works are copyrighted, even something like this text. I'm not sure you CAN avoid putting stuff under copyright, though you can use a quite permissive license.
Basically the only things that aren't under copyright these days are things that are so old that the copyright has run out.
That opinions should be divergent is what should be expected. Current AIs have "jagged capability". If what you want fits where they're good, you can be overly impressed. If it fits where they're poor, you can be overly skeptical.
Actually, people have "jagged capability" also. Don't ask me to win at football. But we have reasonable models for people. We don't have reasonable models for AIs. Making the problem worse, different AIs have different capability profiles. I don't know how good Claude is at coding, but it's reportedly a lot better than Gemini. OTOH, there are other areas where Gemini is better. So you've got to pick the right tool for the right job.
This confusion is what should be expected. I remember the early days of computer use. FORTRAN and COBOL were used in *different* areas. Each one was supreme within its own particular area. And assembler was used in yet different areas. You couldn't just pick a computer; you had to pick the appropriate computer, get it configured correctly, and you were still likely to have problems.
Our OS who art in CPU, UNIX be thy name. Thy programs run, thy syscalls done, In kernel as it is in user!