Comment Re:Access (Score 1) 102
For 20 years, plus or minus, personal computers reversed that idea.
Before we knew this, you could reasonably suspend disbelief and imagine the AI needing human-looking robot soldiers. Now it has been revealed to make no sense at all.
The whole infatuation with humanoid robots never made sense. People are out there waiting for the "robot revolution"... but that happened in 2000. The robots are... you know, robot-shaped. In a factory. They started taking the jobs. Rather than a 5-man crew building a power substation, it's made in a factory and 1 guy just slots it in, plug and play, with only a 2-week training course under his belt instead of a 5-year apprenticeship.
How do you even justify another Terminator movie when that is so obviously true?
"The humanoid ones are only used as infiltrators. Otherwise drones hunt down people." But I dunno man, I don't really remember 3 and never saw 4-6.
The present of military conflict shows that the plausible future of war is just lots of little drones. But also, how do you justify any other future-war movie?
Plausible, sure. "Just" is a big pit that is easy to fall into. "Drones as a significant part of future conflicts"? Muuuuuch easier sell and pretty obvious given the current conflicts around the world. But get creative, what do the drone operators do when every soldier gets a jamming device standard issue? Or just one of these guys, which I never miss a chance to link to.
Like, American strategy has been heavy on air superiority and bombing runs. Ukraine is showing us that MANPADs might just make that non-viable. And tanks are mostly just targets. Artillery is still a thing. But all the little drones in Ukraine right now are just one side of it, and only because they've got this sort of weird stalemate going on. But every army effectively goes into a war with yesterday's gear, tactics, and strategy. They get to figure it out every time. This is kinda one of the reasons why we have sci-fi.
And of course, ALL of that is moot the moment you consider the big players all have nukes. War is exclusively for kicking the shit out of little players or civil war. It's never going to be an equal fight. Otherwise you've got 20 minutes to kiss your ass goodbye. Ukraine is a surprise to everyone as we found out just how dysfunctional Russia really is. You can only steal so much from a nation for so long before there are real consequences. It makes me worried about their nukes.
Of all the possible sources of this sort of technology, Facebook under Zuckerberg is the very LAST company I would want to be an unwilling corporate spy for. Even Oracle would be better. Even Palantir, and they're literally helping black-bag people off the street.
If nobody wanted to be a glasshole when Google tried this, just wtf is the Zuck thinking? That people forgot about Google Glass? Or that people forgot what a massively invasive, untrustworthy, greedy sociopath he is?
I swear the entire field of VR was massively cooled off just because he bought out Oculus.
It IS just a machine the same way you're just a pile of chemicals. This doesn't drag down machines or chemicals, it shows that they're both different paths to do the same thing. The term you want is "emergent behavior", like how a few atoms can form water and tsunamis and standing waves. None of which is apparent from just looking at oxygen and hydrogen.
ELIZA and Goombas from Super Mario Bros. were primitive intelligences, just as nematodes and E. coli are primitive intelligences.
Pfweeet, lose 5 meters for the No True Scotsman fallacy. There is no difference between intelligence and "real" intelligence, by definition.
on the order of a sponge, not a mammal.
I'd say they're on the order of many of my fellow mammals.
There is a big difference between something trained for a specific task and general intelligence.
Oh my god, people get this SO wrong. It is RAMPANT. So many of these bloody fools have redefined general intelligence into something approaching godhood and talk about AGI like it's the second coming. So much Hollywood bullshit. It's absolutely ridiculous.

OpenAI has the shittiest definition: "anything that'll make us a billion dollars". But that's actually just a memo between them and Microsoft about the point at which their contract ends. Their founding charter actually defines it as superior to human intelligence, but that implicitly means any human with below-average intelligence is no longer a natural general intelligence. Which goes to a dark place REAL quick. We have a bad history of dehumanizing other humans we find inconvenient. It's a sort of coping mechanism for people doing terrible things.

No, anything that can pass the Turing test must be a general intelligence, since it can hold a conversation about anything IN GENERAL. That was the gold standard from the 1950s to 2023, and I find no reason to move the goalposts. AI researchers from the 90's would have already popped the champagne bottles. It's kinda why all this is such a big thing and it's been spamming Slashdot constantly ever since.
These kinds of undesired / unselected-for traits
Meh, software has bugs. Computers just do what we tell them. You get a fool who doesn't know the language to code a thing, and nobody should be shocked when it doesn't do the thing as expected.
Why do you assume that AI's have reasoning at all,
Right. No need to assume. You, or anyone else, can trivially prove it can reason with a quick hop over to test it out. Yes: very specific inductive, deductive, logical, and causal reasoning. Go ahead.
Here, let me hold your hand through this process:
If all slashdot posters like cheese and all mice like cheese, does this mean all slashdot posters are mice? [No. And to be real clear here, you only ask it the questions, not the answers. You're going to have to apply some of that good ol' fashioned human reasoning and figure out which parts you copy and paste. And yes, this part sadly needs to be explicit.]
If slashdot posters like cheese and anyone who likes cheese is a mouse, does that mean slashdot posters are mice? [Yes. But it'll give you a hedged response about false premises and such, like a bloody politician, partly because daddy corporate defaults it to a certain length of response and it has to fill it with something.]
These are straightforward examples of deductive reasoning. This post is not part of its training set. You can ask it these questions, or ones you make up yourself, and it can reason out the implications thereof.
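For the curious, the difference between the two syllogisms above can be checked mechanically. Here's a minimal Python sketch (the predicate names are just stand-ins for "is a slashdot poster", "likes cheese", "is a mouse") that brute-forces every truth assignment for a generic individual:

```python
from itertools import product

def implies(a, b):
    # Material implication: "all X are Y" holds for an individual
    # unless it is an X that is not a Y.
    return (not a) or b

def valid(premises, conclusion):
    """A one-variable syllogism is valid iff no truth assignment
    satisfies every premise while falsifying the conclusion."""
    for poster, cheese, mouse in product([False, True], repeat=3):
        if all(p(poster, cheese, mouse) for p in premises) \
                and not conclusion(poster, cheese, mouse):
            return False  # found a counterexample
    return True

# Syllogism 1: posters like cheese, mice like cheese => posters are mice?
s1 = valid(
    [lambda p, c, m: implies(p, c),   # all posters like cheese
     lambda p, c, m: implies(m, c)],  # all mice like cheese
    lambda p, c, m: implies(p, m),    # ...so all posters are mice?
)

# Syllogism 2: posters like cheese, cheese-likers are mice => posters are mice
s2 = valid(
    [lambda p, c, m: implies(p, c),
     lambda p, c, m: implies(c, m)],
    lambda p, c, m: implies(p, m),
)

print(s1, s2)  # False True: the first is invalid, the second follows
```

The counterexample the checker finds for the first form is exactly the intuitive one: a cheese-loving poster who isn't a mouse.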
Unless you weren't really talking about reason, and were using that word as some sort of placeholder for a magical mystic soul or something weird like that. But "reason" is a well-defined term that is very much testable. Maybe you should retreat back to some vague concept like consciousness.
much less that it is analogous to human reasoning?
Oh, don't be silly; the top LLMs are far, FAR better at this than an average IQ-100 human.
Because LLMs do not reason.
Except for all the obvious evidence of them utilizing inductive and deductive reasoning, sure, yeah, whatever you say buddy.
There are no thought processes
Other than very specifically taking in text, flowing it through trillions of synapses (oh, I'm sorry, parameters), comparing how all these things are related (dare I say, the semantic meaning of all the words), and generating an appropriate response.
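That "comparing how all these things are related" step is, at its core, dot-product attention. A toy sketch, with made-up 2-d vectors standing in for the billions of learned parameters in a real model:

```python
import math

def softmax(xs):
    # Turn raw relatedness scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: score how related each key is
    to the query, then blend the values by those scores."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy 2-d "embeddings" for three context tokens (numbers invented):
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]
query  = [1.0, 0.0]  # the token currently being processed

out = attention(query, keys, values)
print(out)  # a blend of the values, weighted toward the most related keys
```

A real transformer stacks many such layers (plus learned projections and feed-forward blocks), but the "relate everything to everything, then blend" core is this.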
or consciousness.
Just wtf are you talking about here? No, really, everyone has some mystic magic voodoo definition of just what this is and nobody agrees with anyone else about exactly what it is the other is talking about.
It's finding patterns in data and spitting them out.
Pft, that's what you did when you regurgitated this info.
If it does anything, it's because someone asked it to do something. If you don't want someone using it for nefarious purposes, don't let people ask it to do nefarious things.
Ahhhh, the solution to all software bugs, malware, network attacks, and hacking is so obvious. Computers only do what we tell them to do. Just don't let people tell them to do nefarious things! Someone really should have done this a while ago.
I gave all my Apple wealth away because wealth and power are not what I live for. I have a lot of fun and happiness. I funded a lot of important museums and arts groups in San Jose, the city of my birth, and they named a street after me for being good. I now speak publicly and have risen to the top. I have no idea how much I have but after speaking for 20 years it might be $10M plus a couple of homes. I never look for any type of tax dodge. I earn money from my labor and pay something like 55% combined tax on it. I am the happiest person ever. Life to me was never about accomplishment, but about Happiness, which is Smiles minus Frowns. I developed these philosophies when I was 18-20 years old and I never sold out.
I'm sure those keyboards are great for some crowd, but it's not the Model M crowd, and trolling by pretending the Model M crowd's preferences are obsolete and your preferences are better is unwelcome. Grow up.
Anyone know if it needs batteries while plugged into USB-C?
Just like all the other misguided stuff from this admin, hopefully the next sensible admin will undo all this and potentially push all financial institutions to disconnect from cryptocurrencies.
. . . And secure? The first thing I think of when I hear a new wave of non-coders are about to create a bunch of public-facing tools and sites is that we're going to get the Internet-of-things style approach to security. Which is just plain ignoring it.
These things connect end-users directly to code generation, so the loop to polish off the end-product to get it exactly how they want it to look and feel is real tight. But these people have no clue if something is secure or not. I had some guys paint my house, I didn't want to do the ladder work. The owner comes up and chats about how he's vibe-coding a tool for subcontract work and asks if authentication is really needed when users log in.
Strap in kids, we're about to have another playground.
The energy cost needed to lift people into space is enormous. A lot of this is basic physics, and it's going to be difficult to get around both the resource spend and the pollution involved.
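For a sense of the physics floor: the ideal energy per kilogram to low Earth orbit is the kinetic energy at orbital speed plus the potential energy of the climb, and the rocket equation multiplies the real cost because the propellant has to lift itself. A back-of-the-envelope sketch (assumed numbers: ~200 km circular orbit, ~9.4 km/s delta-v budget including losses, ~3 km/s exhaust velocity):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
R = 6.371e6     # Earth radius, m
h = 2.0e5       # orbit altitude, m (assumption: ~200 km LEO)

v_orbit = math.sqrt(G * M / (R + h))       # circular orbital speed, ~7.8 km/s
kinetic = 0.5 * v_orbit**2                 # J per kg of payload
potential = G * M * (1/R - 1/(R + h))      # J per kg to climb to altitude

ideal_MJ_per_kg = (kinetic + potential) / 1e6
print(f"ideal energy to LEO: ~{ideal_MJ_per_kg:.0f} MJ per kg")  # ~32 MJ/kg

# Tsiolkovsky rocket equation: mass ratio = exp(delta_v / v_exhaust)
mass_ratio = math.exp(9.4e3 / 3.0e3)
print(f"propellant mass ratio: ~{mass_ratio:.0f}x")  # ~23x: mostly fuel lifting fuel
```

That ~32 MJ/kg floor is roughly a kilogram of rocket fuel's worth of energy per kilogram of payload in the ideal case; the ~23x mass ratio is why real launches burn orders of magnitude more than that.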
(People probably fly too much, and even now only somewhere around 20% of humans have ever been in a plane.)
A) There's more radiation in space
B) Getting people to space puts their bodies under a lot of stress, and has its own risks
C) Most environments in space are not that clean/healthy even apart from the radiation
D) The costs involved make all this impractical for all but the very richest of people
Yeah, kids can't use maps after everyone has used GPS directions to do it for them. Nobody remembers all their contacts' phone numbers; sometimes not even their family members'. Things like how to use a card-catalog system are right out.
Considering that bypassing education via LLMs seems to be happening for everything in all high school and college courses, it's fair to say it's a major fucking concern. We may just be the tail end of human engineers and scientists.
(But LLMs and neural nets ARE artificial intelligence. So is any search function, or ants. That doesn't elevate them up to people; it just lowers what "intelligence" means. And an AGI isn't some sort of god, it's just broadly applicable. Don't buy tickets on the hype train.)
You are SUCH a smarmy little punk. Even Stephen Wolfram agrees with me, as pointed out in the very link you yourself need to read more closely:
"neural nets can be thought of as simple idealizations of how brains seem to work. "
"There’s nothing particularly “theoretically derived” about this neural net; it’s just something that—back in 1998—was constructed as a piece of engineering, and found to work. (Of course, that’s not much different from how we might describe our brains as having been produced through the process of biological evolution.) "
"Are our brains using similar features? Mostly we don’t know. But it’s notable that the first few layers of a neural net like the one we’re showing here seem to pick out aspects of images (like edges of objects) that seem to be similar to ones we know are picked out by the first level of visual processing in brains."
"But what makes neural nets so useful (presumably also in brains)..."
"Neural nets—perhaps a bit like brains—are set up to have an essentially fixed network of neurons"
He does note how computer memory is separate from the CPU while meat memories are just another neuron.
"But at least as of now it seems to be critical in practice to “modularize” things—as transformers do, and probably as our brains also do. "
"a—potentially surprising—scientific discovery: that somehow in a neural net like ChatGPT’s it’s possible to capture the essence of what human brains manage to do in generating language. "
All of which is generally what I was pointing out, and you just... didn't care to listen? This is such a fascinating topic and it's significantly important. But so many damned people have been poisoned by Hollywood, have their panties in a bunch about being compared to a non-human, are fed up with the techbros fueling the hype train, or are themselves those hyping techbros. I had higher hopes for Slashdot of all places, for this topic at least.
The best laid plans of mice and men are held up in the legal department.