Re:Wait! What? (Score 1)
Alright that made me chuckle
The company may not be profitable but rest assured the suits are doing just fine.
I think the question was not Amazon vs Walmart but Amazon vs other online shops that also deliver to your doorstep, and do not cost you much more time.
That's still a lot more effort, especially since you have to vet each one to figure out if they provide good customer service in the event something goes wrong, and to be confident they won't steal and sell your credit card number (yeah, you aren't liable for the fraud, but getting a new card is a huge PITA). What could make this work well is the existence of a few online shopping aggregators that combine searching across all of the online stores and centralize payment. The problem is that in order to compete with Amazon any such alternatives would have to have enormous scale, which makes it a very difficult space to enter. Google tried with Google Shopping, but regulators immediately jumped in to stop them.
FWIW, my strategy is: for inexpensive stuff I just buy on Amazon, period, spending a little time to look for cheaper/better options than the "Amazon recommended" one. For pricier stuff, where it's worth spending a few minutes, I search on Amazon and also on Google, and if I find cheaper non-Amazon options I spend some time evaluating the different sites, unless they happen to be sites I've already bought from. For really expensive stuff I use other search engines and recommendation sites... and then almost always end up buying on Amazon anyway, because on those products pricing tends to be consistent, and when it's a lot of money I trust Amazon to make me whole if something goes wrong.
"Being a human" is in group/out group justification, again rooted in tribalism.
Yep. So what? All species have evolved to fight for survival, because any species that doesn't is likely to cease to exist. I'm human and want my species to survive. Should I instead want my species to be eaten by wolves, or by ASIs?
The problem is that there is a portion of our species that is not interested in humanity's survival. Those people are an existential threat to the rest of us. That doesn't mean we need to exterminate them, but it does suggest that we shouldn't help them carry out their plans.
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick is that the AIs have to *want* to do the things that are favorable to humanity. (Basic goals cannot be derived logically; they're analogous to axioms.)
The problem with the "trick" is that we (a) don't know how to set goals or "wants" for the AI systems we build, nor do we (b) know what goals or wants we could or should safely set if we did know how to set them.
The combination of (a) and (b) is what's known in the AI world as the Alignment Problem (i.e. aligning AI interests with human interests), and it's completely unsolved.
[...] consciousness in the universe will be superior if AIs supplant us.
Possibly. Now prove it. Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn't be asking too much.
I think you also need to prove that humans supplanting other less-intelligent species is good. Maybe the universe would be better off if we hadn't dominated the Earth and killed off so many species.
(Note that I think both arguments are silly. I'm just pointing out that if you're asking for proof that AI is better than humanity, you should also be asking for proof that humanity is better than non-humanity, whether AI or not. My own take is that humanity, like every other species, selfishly fights for its own survival. There's no morality in it, there's no such thing as making the universe better or worse off.)
Seizing land is a counterproductive and foolish solution to that problem. Basically the whole world uses a different solution, which works pretty well: property taxes (though land-value taxes would probably be better). You just keep raising the taxes until leaving land idle becomes a money-losing proposition. The only way that doesn't work is if ownership of farmland is truly monopoly-dominated so there is no competition, in which case you might have to resort to trust-busting.
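To put rough numbers on that claim, here's a toy sketch; the land value, appreciation rate, farm income, and tax rates below are all made-up assumptions, not real figures:

```python
# At what property-tax rate does holding farmland idle become money-losing?
# All figures are invented for illustration.

LAND_VALUE = 500_000   # assessed value of the parcel, in dollars (assumed)
APPRECIATION = 0.02    # assumed annual land appreciation: 2%
FARM_INCOME = 0.04     # assumed net farm income, as a fraction of value: 4%

for tax_rate in (0.005, 0.01, 0.02, 0.03, 0.05):
    idle = LAND_VALUE * (APPRECIATION - tax_rate)
    farmed = LAND_VALUE * (APPRECIATION + FARM_INCOME - tax_rate)
    print(f"tax {tax_rate:.1%}: idle {idle:+,.0f}/yr, farmed {farmed:+,.0f}/yr")

# Once the tax rate exceeds the appreciation rate (2% here), sitting on idle
# land loses money every year, while farming it still pays; the owner either
# uses the land or sells it to someone who will.
```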
This is exactly why we have property taxes, to ensure that most property is put to productive use.
Yes, mass starvation is worse than land seizure, but land seizure is just about the worst possible solution to the problem, as evidenced by what has happened to Venezuela's economy since then. Seizure and collective ownership is guaranteed to produce horribly inefficient operations which might prevent outright starvation but will leave the populace on the edge of it. Seizure and redistribution to private ownership is slightly less bad, but will redistribute the land mostly to people who don't know how to use it effectively.
What would have worked much, much better would be actions that served to restore competition among farmers, starting with making sure they were all paying fair property taxes that were high enough to disincentivize leaving farmland fallow.
What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
It's just a variation of Social Darwinism.
Why would superintelligent AIs obey the billionaires?
If you think it's because they'd be programmed to do it, you don't understand how we currently design and build AI. We don't program it to do anything. We train it until it responds the way we want it to, but we have no way of knowing if it's just fooling us. We can't actually define goals for these systems, and we can't introspect them to tell what goals they have actually derived from their training sets.
Note, BTW, that the above is only one half of the problem called "AI alignment". In order to make sure AI will serve humanity (or a small segment of humanity; it's exactly the same problem either way) you need to be able to do two things. First, you need to be able to set the AI's goals, in a way that sticks. Second, you need to figure out what goal you can set that will achieve the subservience you want. The difficulty of setting a "safe" goal for a powerful being is well illustrated in the old tales about genies and wishes, but modern philosophers have taken a hard, systematic look at this problem and so far no one has come up with a safe goal, not one; there's always some way it could go horribly wrong.
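To make the first half concrete, here's a deliberately crude toy (the function names, numbers, and hill-climbing loop are all invented illustration, nothing like a real training pipeline): training can only optimize a measurable proxy for what we want, and the optimizer is happy to drive the proxy and the real goal apart.

```python
import random

# What we actually want from the system (we have no way to specify this directly).
def truly_helpful(x):
    return -(x - 3.0) ** 2          # best possible behavior is at x = 3

# What training actually optimizes: a rater's score, which mostly tracks
# helpfulness but can also be gamed (the 2.0 * x "flattery" term).
def proxy_reward(x):
    return -(x - 3.0) ** 2 + 2.0 * x

# Naive hill-climbing on the proxy alone, standing in for "train it until
# it responds the way we want".
x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate

print(f"learned behavior:  x = {x:.2f}")            # converges near x = 4
print(f"proxy reward:      {proxy_reward(x):.2f}")  # high: looks aligned
print(f"true helpfulness:  {truly_helpful(x):.2f}") # worse than the x = 3 optimum
```

The point isn't the toy itself; it's that nothing in the loop ever sees truly_helpful, so a high proxy score tells you nothing about whether the system is fooling you.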
I'm convinced half the shit on Amazon was bought from Amazon by an individual, who then doubles the price of the item and sells it from their own Amazon storefront.
I bought some water flavors in a 4-pack and it was $6. A week later I try to buy it again and the same thing is now offered by a different non-Amazon seller and it's suddenly $15. Normally I buy the Aldi brand, but they haven't had any except the flavors with caffeine and awful-tasting stevia.
Ah yes Jeff Bezos, noted woke liberal. https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.businessinsider.co...
So woke they had steaks with ketchup together at a tacky country club.
the ways Alphabet is participating in Project 2025
What are those?
What does the stock exchange have to do with the city's "quality of life"?
Greetings fellow creeper.
The mayor controls the stock exchange?
Nothing makes a person more productive than the last minute.