
Comment Re:Could we "pull the plug" on networked computers (Score 1) 68

Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.

I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Dexam...

Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.techopedia.com%2Ftim...

Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.co.uk%2Fnews%2Ftec...

Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.buzzfeed.com%2Fmikes...

Sure, these are not quite the same as "AI-powered robots shooting everyone." The fact that "AI" of some sort is involved is almost incidental; plain algorithms (computerized or not) have been used for decades to similar ends, such as redlining sections of cities to prevent issuing mortgages.

Of course there are examples of robots killing people with guns, but they are still unusual:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheconversation.com%2Fan...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.npr.org%2F2021%2F06%2F01...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2FFutur...
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/story/07/...

These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."

But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (the first items).

Is there money to be made by fear mongering? Yes, I have to agree you are right on that.

Is *all* the worry about AI profit-driven fear mongering -- especially about concentration of wealth and power by what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc)?

I think there are legitimate (and increasing) concerns similar to, and worse than, the ones that, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- and generally not issues arising from malice or from social biases implemented in part intentionally by humans. Although one ending of a "Giants" book ("Entoverse" I think, it's been a long time) does involve an AI in league with the heroes doing unexpected stuff by providing misleading synthetic information to humorous effect.

Of course, our lives in the USA have been totally dependent for decades on 1970s-era Soviet "Dead Hand" technology that the US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...

Two other USSR citizens we can thank for our current life in the USA: :-)

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."

There is even a catchy pop tune related to the last item: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."

If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI, would we as a global society be better off?

Here is a professor (Alain Kornhauser) I worked with on AI, robots, and self-driving cars in the second half of the 1980s, commenting recently on how self-driving cars are already safer than human-operated cars by a factor of ten in many situations, based on Tesla data:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.

In general, AI is a complex unpredictable thing (especially now) and "simple" seems like a prerequisite for reliability (for all of military, social, and financial systems):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.infoq.com%2Fpresenta...
"Rich Hickey emphasizes simplicityâ(TM)s virtues over easinessâ(TM), showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

Given that we as a society are pursuing a path of increasing complexity and related risk (including of global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler better-understood locally-focused resilient infrastructures (to little success, sigh).
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...

Examples of related fears from my reading too much sci-fi: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"

So, AI out of control is just one of those concerns...

So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).

Comment Re:Could we "pull the plug" on networked computers (Score 1) 68

Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and contact with AI luminaries (like Marvin Minsky), writes about AI and robotics.

From the Manga version of "The Two Faces of Tomorrow":

"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F3...

"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F4...

Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.com%2Fnews%2Fartic...

I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer uses only printed letters with enclosed checks, sent to companies, to change the world to its preferences. It was insightful even back then to see how a computer could simply hijack our socio-economic system to its own benefit.

Arguably, modern corporations are a form of machine intelligence, even if some of their components are human. I wrote about this in 2000:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."

So, as another question, how easily can we-the-people "pull the plug" on corporations these days? I guess there are examples (Theranos?), but they seem to have more to do with fraud -- rather than with a company found pursuing the ideal of capitalism: privatizing gains while socializing risks and costs.

It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. And meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."

Maybe we don't have an AI issue as much as a corporate governance issue? Which circles around to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment Re:asking for screwups (Score 1) 111

How would an LLM accurately determine which cases were "easy"? They don't reason, you know. What they do is useful and interesting, but it's essentially channeling: what is in its giant language model is the raw material, and the prompt is what starts the channeling. Because its dataset is so large, the channeling can be remarkably accurate, as long as the answer is already in some sense known and represented in the dataset.

But if it's not, then the answer is just going to be wrong. And even if it is, whether the answer comes out as something useful is chancy, because what it's doing is not synthesis -- it's prediction based on a dataset. This can look a lot like synthesis, but it's really not.
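
A toy sketch of that point (in Python; the tiny bigram table below is a made-up stand-in for an LLM's giant model, purely for illustration). Generation here is nothing but next-token lookup plus chance: a prompt that stays inside the "dataset" produces fluent output, and a prompt outside it produces nothing useful:

import random

# A tiny "dataset" and a bigram table recording which words follow which.
training_text = ("the case was easy the case was hard "
                 "the ruling was easy").split()
follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(prompt_word, length=6):
    # Pure next-token prediction: no reasoning, just lookup and chance.
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # the prompt led outside the dataset...
            break           # ...so there is nothing left to predict
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))    # fluent, because "the ..." is well represented
print(generate("novel"))  # just echoes the prompt: no data to channel

An LLM is the same move at vastly larger scale (learned probabilities over subword tokens instead of a literal lookup table), which is why output quality tracks how well the dataset covers the question, not how "easy" the question is.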

Comment Could we "pull the plug" on networked computers? (Score 1) 68

I truly hope you are right that all the fear mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes, and to a lesser extent plagues, are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with a risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's as if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which they can't, thankfully). But how do you "pull the plug" on all cars -- especially if a transition from acting as a faithful companion to a "Christine" killer car happens overnight? Or even if just all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019 on how AI was then already so much in our lives:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).

Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (a network that most people have come to depend on, and which embodies AI being used for many tasks) may be a huge challenge, if not an impossible one. He suggested a network can get smarter over time and unintentionally develop a survival instinct, as a natural consequence of trying to remain operational to perform its purported primary function in the face of random power outages (like those from lightning strikes).

But even if we wanted to turn off AI, would we? As a (poor) analogy: while there have been brief periods where the global internet supporting the world wide web has been restricted in specific places, and some selective filtering of the internet goes on continuously in various nations (usually to give preference to local national web applications), could we turn off the global internet at this point even if it were somehow proven to produce great harms? We are so dependent on the internet for day-to-day commerce, as well as, sigh, entertainment (i.e. so much "news"), that I wonder if that is even collectively possible now. The issue there is not technical (yes, IT server farm administrators and individual consumers with home PCs and smartphones could turn off every networked computer in theory) but social (would people do it?).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But Greer's novel still seems like a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon and loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do of course, especially if the rest of their life improves in various ways for whatever reasons.

Also some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more-easily centralized technologies (which is a concentration-of-wealth point in some ways rather than a strictly technological point). Contrast with the independent high-tech self-maintaining AI cybertanks in the Old Guy Cybertank novels who have built a sort-of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 68

Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit.""
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffuturism.com%2Fai-godfat...
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive.
        "This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment." ...
      A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply.
      "This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."
      The concept of building "safe" AI is far from new, of course -- it's quite literally why several OpenAI researchers left OpenAI and founded Anthropic as a rival research lab.
      This one seems to be different because, unlike Anthropic, OpenAI, or any other companies that pay lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit -- though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others."

Yoshua Bengio seems like someone at least trying to make AI "Scientists" from a cooperative abundance perspective, rather than creating more competitive AI agents.

Of course, even that could go horribly wrong if the AI misleads people subtly.

From 1957: "A ten-year-old boy and Robby the Robot team up to prevent a Super Computer [which provided misleading outputs] from controlling the Earth from a satellite."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.imdb.com%2Ftitle%2Ftt0...

From 1992: "A Fire Upon the Deep", about an AI that misleads people exploring an old archive, who thought their exploratory AI work was airgapped and firewalled as they built advanced automation the AI suggested:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

Lots of other sci-fi examples of deceptive AI exist (like in the Old Guy Cybertank series, and more). The worst are along the lines of a human (e.g. Dr. Smith of "Lost in Space") intentionally programming the AI (or AI-powered robot) to harm others for that person's intended benefit.

Or sometimes (like in a Bobiverse novel, spoiler) a human may bypass a firewall and unleash an AI out of a sense of worshipful goodwill, to unknown consequences.

But at least the AI Scientist approach of Yoshua Bengio is not *totally* stupid in the way a reckless race to create competitive commercial super-intelligent AIs otherwise is for sure.

Some dark humor on that (with some links fixed up):
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....
====
[People are] right to be skeptical on AI. But I can also see that it is so seductive as a "supernormal stimuli" it will have to be dealt with one way or another. Some AI-related dark humor by me.
* Contrast Sergey Brin this year:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffinance.yahoo.com%2Fnews...
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology.
* With a Monty Python sketch from decades ago:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgenius.com%2FMonty-pytho...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event. ...
          Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
        (The only thing that worries me, Jim, is being being the first one down that gullet.)"
====

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 68

If people shift their perspective to align with the idea in my sig, or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K. Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

That is because our direction out of any singularity may have something to do with our moral direction going into it. So we desperately need to build a more inclusive, joyful, and healthy society right now.

But if we just continue extreme competition as usual between businesses and nations (especially for creating super-intelligent AI), then we are likely "cooked":
"it's over, we're cooked!" -- says [AI-generated] girl that literally does not exist (and she's right!)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fsingu...

As just one example, here is Eric Schmidt essentially saying that we are probably doomed if AI is used to create biowarfare agents (which it almost certainly will be if we don't change our scarcity-based perspective on using these tools of abundance):
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Alternatives: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...
"There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" [and economic] agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all."

And also: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-...
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

See also "The Case Against Competition" by Alfie Kohn:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alfiekohn.org%2Farti...
"This is not to say that children shouldn't learn discipline and tenacity, that they shouldn't be encouraged to succeed or even have a nodding acquaintance with failure. But none of these requires winning and losing -- that is, having to beat other children and worry about being beaten. When classrooms and playing fields are based on cooperation rather than competition, children feel better about themselves. They work with others instead of against them, and their self-esteem doesn't depend on winning a spelling bee or a Little League game."

Comment Re:This isn't necessarily bad (Score 1) 141

That's what I assumed as well. Buy Now Pay Later loans like this have a long history of being predatory. So I took a look at what it would cost to accept Klarna (as an example) as a merchant. The reality is that they have transaction fees that are very similar to credit cards. In other words, these companies do not need to rely on missed payments to make a profit.

These companies are apparently setting themselves up to replace traditional credit card payment systems, which suits me right down to the ground.

The difference is that it is much easier to get a Klarna account, and it isn't (yet) as widely available.

Comment Re:Credit Cards? (Score 2) 141

I felt the same way at first. Traditional BNPL schemes were very predatory. However, Klarna (and others) appear to be playing approximately the same game as the traditional credit card processors. They charge transaction fees that are roughly the same as credit card processors', and, like credit cards, their customers don't pay extra if they pay their bill on time. Klarna, in particular, actually appears to give customers interest-free time.

The difference, for consumers, is primarily that a Klarna account is much easier to get, and it isn't universally accepted. From a merchant perspective, depending on your payment provider, you might already be able to accept Klarna, and it appears that it mostly works like a credit card. It's even possible that chargebacks are less of an issue, although it does appear that transaction fees are not returned in the case of a refund.

Personally, I am all for competition when it comes to payment networks. Visa and Mastercard are both devils. More competition for them is good for all of us.

Comment Re:I already know the ending (Score 1) 183

Fortunately he's incompetent and has already run Tesla into the ground. The company is basically living off schizoid incels buying the stock. SpaceX's success is largely based on the fact that they keep Musk away from actual management, but with Tesla a smoking ruin he's going to push his way into that and mess it up too.

Comment Security teams usually stop caring when not paid (Score 1) 167

From: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.vice.com%2Fen%2Farticl...
        ""The billionaires understand that they're playing a dangerous game," Rushkoff said. "They are running out of room to externalize the damage of the way that their companies operate. Eventually, there's going to be the social unrest that leads to your undoing."
        Like the gated communities of the past, their biggest concern was to find ways to protect themselves from the "unruly masses," Rushkoff said. "The question we ended up spending the majority of time on was: 'How do I maintain control of my security force after my money is worthless?'"
        That is, if their money is no longer worth anything -- if money no longer means power--how and why would a Navy Seal agree to guard a bunker for them?
        "Once they start talking in those terms, it's really easy to start puncturing a hole in their plan," Rushkoff said. "The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything's gonna be OK."
