Comment Re:seems excessive (Score 1) 79
I know what that would do to my morale.
The 1943 story "Q. U. R." featured an inventor who simplified robots that were going insane from having humanoid features they had no use for.
Sounds like an emotional reason, so maybe that is the right answer. People often don't care to admit the real reason, since that comes from a different part of the brain. But who knows.
Everyone now focus on compromising the device instead.
But what if how the world really works differs from the intention of the policy?
Foxconn renames two factories in Zhengzhou to "Vietnam" and "India". Chinese government confirms by assigning territorial sovereignty of the loading docks to the respective countries.
The word "communication" gets vastly overused, as if the environment will force you to communicate and that'll just work.
What's really meant is empathy, intelligence, and buying into the values and goals and attitudes.
If people are already compatible then you DO NOT NEED TO BE IN THE SAME ROOM. You'll naturally find each other online or on calls as and when needed.
If you have to force people into the same airspace in order to get white collar work done then you don't have a team, you have reluctant and undriven and incompatible people.
Wow. I'm amazed anyone who's spent any time looking into the lives of people in Europe and Asia could believe this. Oh wait, you've never done that, have you?
Please keep sharing anything you observe as it happens.
The last thing any of us should want is for OpenAI to take over from Google.
I really don't understand this decision that Google should be broken up as though its 'monopoly' in search isn't entirely based on skill and talent. But if we *are* going to force companies to break up into components, can we make sure new monoliths aren't just created as a result?
This, entirely this.
That's an extreme example. How about an ordinary example? She seems great, and I fell in love (factually I don't know her whole life story, nor whether she has the genes which make her prone to addiction, nor what psychological damage is lurking in her subpersonalities, nor whether she will have fertility problems, etc.).
My point is that we rely on narratives far more than we perhaps realise. Which is why anyone who wants to manipulate can just focus on using narratives. Hence more "valuable".
That's how I use it and I find it works really well as a lossy search engine.
The brain is not one model trying to do everything.
I think maybe the big mistake that's been made is imagining that intelligence is just one model.
You only have to introspect a bit to realize that when you go through your day you're flipping and changing states between what I guess would be different intelligences.
It probably makes no sense to think of the brain as being a single model.
We have lots and lots of models which are all "trained" or good in some way at doing certain things.
We have mathematical intelligence, emotional intelligence, physical intelligence, and so on.
That's probably because the brain has all these different specialist regions, and there's some interesting new work on the left and right hemispheres as a whole representing two entirely different modes of attention -- ways of attending to the world.
(The old left-right brain thing apparently got it wrong but the new research thinks they've got the real answer.)
I think AIs should be designed as highly specialist models which are really good at doing specific things.
I'm sure it has an uncanny ability to recognise patterns where humans can't see them, given enough training.
Maybe these models are breaking down because they're trying to bring together too many disparate things and they lose structure because there is no one structure which can do them all.
Specialist models with specialist real world problems. The AI "apps".
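The idea of specialist models behind a dispatcher can be sketched in a few lines. This is purely illustrative: the function names, the keyword-matching heuristic, and the registry are all invented here; real systems would route with a learned classifier rather than keyword lookup.

```python
# Hypothetical sketch: a router dispatching queries to specialist "models"
# (plain functions stand in for separately trained systems).

def math_specialist(query: str) -> str:
    return f"[math model] handling: {query}"

def emotion_specialist(query: str) -> str:
    return f"[emotion model] handling: {query}"

def general_fallback(query: str) -> str:
    return f"[general model] handling: {query}"

# Each specialist registers the topics it is "trained" for.
SPECIALISTS = {
    ("integral", "equation", "sum"): math_specialist,
    ("feel", "upset", "happy"): emotion_specialist,
}

def route(query: str) -> str:
    """Pick the specialist whose keywords match; otherwise fall back."""
    q = query.lower()
    for keywords, handler in SPECIALISTS.items():
        if any(k in q for k in keywords):
            return handler(query)
    return general_fallback(query)

print(route("Solve this equation for x"))   # math specialist
print(route("I feel overwhelmed today"))    # emotion specialist
```

The point of the sketch is the shape, not the matching rule: each "app" owns a narrow domain, and nothing forces one model to carry every structure at once.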
Narratives are often more valuable than facts. Why did you marry X rather than Y? Why buy brand A rather than B? Why vote for J rather than K? Etc. Most of our lives are a continuous need to make meaning. That's why we're so easy to manipulate.
If material could never be reproduced (reading and remembering) then the material would be worthless to everyone. But if it could always be reproduced with no benefit to its creators, then they could not feed themselves and survive. Where to set the balance is full of detail and difficulty.
LLMs may well need their own special rules. For example, I for one gave up my O'Reilly subscription because now I can look up the basics of some tech thing quickly from an LLM -- so the AI companies are benefiting and the original publishers are suffering. Those kinds of balances need to be addressed.