
Comment Re:fake news!!! (Score 1) 58

CBP and the government have been collecting data directly from the airlines ever since the aftermath of 9/11 through a number of programs, for example to check passengers against watch lists and to verify the identity of travelers on international flights.

What has changed is that buying data from a commercial broker, instead of collecting it through a congressionally instituted program, bypasses judicial review and the limits Congress set on data collected through those programs -- for example, it can track passengers on domestic flights even if they're not on a watch list.

Comment Re: It's not a decline... (Score 1) 162

Fascism isn't an ideology; it's more like a disease of ideology. The main characteristic of fascist leaders is that they're unprincipled; they use ideology to control others, they're not bound by it themselves. It's not just that some fascists are left-wing and others are right-wing. Any given fascist leader is left-wing when it suits his purposes and right-wing when that works better for him. The Nazis were socialists until they got their hands on power and into bed with industry leaders, but it wasn't a turn to the right. The wealthy industrialists thought they were using Hitler, but it was the other way around. The same with Mussolini. He was socialist when he was a nobody but turned away from that when he lost his job at a socialist newspaper for advocating militarism and nationalism.

In any case, you should read Umberto Eco's essay on "Ur-Fascism", which tackles the extreme difficulties in characterizing fascism as an ideology (which as I stated I don't think it is). He actually lived under Mussolini.

Comment Could we "pull the plug" on networked computers? (Score 1) 62

I truly wish you were right that all the fear-mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes, and to a lesser extent plagues, are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with the risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's as if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which they can't, thankfully). But how do you "pull the plug" on all cars -- especially if a transition from faithful companion to "Christine" killer car happens overnight? Or if even just all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019, on how much AI was already part of our everyday lives:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).

Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (a network that most people have come to depend on, and which embodies AI doing many tasks) may be a huge challenge, if not an impossible one. He suggested a network can get smarter over time and unintentionally develop a survival instinct, as a natural consequence of trying to remain operational to perform its primary function in the face of random power outages (like those from lightning strikes).

But even if we should turn off AI, would we? As a (poor) analogy: while there have been brief periods where the global internet supporting the world wide web has been restricted in specific places, and while some selective filtering of the internet is continuously ongoing in various nations (usually to give preference to local national web applications), would we be likely to turn off the global internet at this point even if it were somehow proven to produce great harm? We are so dependent on the internet for day-to-day commerce, as well as, sigh, entertainment (i.e. so much "news"), that I wonder whether such a thing is even collectively possible now. The issue is not technical (yes, IT server farm administrators and individual consumers with home PCs and smartphones could in theory turn off every networked computer) but social (would people do it?).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But it still seems like a bit of a fantasy for Greer's novel to suggest that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon, loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do, of course, especially if the rest of their life improves in various ways for whatever reasons.

Also, some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more easily centralized technologies (which is a concentration-of-wealth point in some ways, rather than a strictly technological one). Contrast that with the independent, high-tech, self-maintaining AI cybertanks in the Old Guy Cybertank novels, which have built a sort of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Re:Don't forget Starlink (Score 1) 102

Back in the days of the Rainbow series, the Orange Book required that data marked as secure could not be transferred to any location or user that was (a) not authorised to access it, or (b) lacking the security permissions, regardless of any other authorisation. There was an additional protocol listed in those manuals -- I don't know if it was ever applied -- which stated that data could not be transferred to any device or any network that did not enforce the same security rules or was not authorised to access that data.
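
What that describes is essentially the mandatory access control of the Bell-LaPadula model that the Orange Book's higher divisions were built around: "no read up, no write down", so labeled data can never flow to a destination with a weaker label. Here is a minimal sketch of that check in Python; the level names, the "OPS" compartment, and the function names are my own illustrative assumptions, not anything taken from the TCSEC documents themselves:

from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

class Label:
    def __init__(self, level, compartments=()):
        self.level = level
        self.compartments = frozenset(compartments)

    def dominates(self, other):
        # A label dominates another if its level is at least as high and it
        # carries every compartment ("need to know") the other carries.
        return (self.level >= other.level
                and other.compartments <= self.compartments)

def may_read(subject, obj):
    # "No read up": a subject may only read data its clearance dominates.
    return subject.dominates(obj)

def may_write(subject, target):
    # "No write down": data may only flow to a destination whose label
    # dominates the subject's -- the rule that would forbid copying
    # secure data onto an unsecured device or network.
    return target.dominates(subject)

# Example: a SECRET-cleared process can read down but never write down.
analyst = Label(Level.SECRET, {"OPS"})
public_server = Label(Level.UNCLASSIFIED)
assert may_read(analyst, public_server)       # reading down is allowed
assert not may_write(analyst, public_server)  # writing down is refused

With the write-down rule enforced by the system itself, an unsecured server simply never receives labeled data, no matter whose passwords are stolen -- which is the point made below about unsecured connections not mattering.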

Regardless, in more modern times, these protocols were all abolished.

Had they not been, and had all protocols been put in place and enforced, then you could install all the unsecured connections and unsecured servers you liked, without limit. It wouldn't have made the slightest difference to actual security, because the full set of protocols would have required the system as a whole to not place sensitive data on such systems.

After the Clinton email server scandal, the Manning leaks, and the Snowden leaks, I'm astonished this wasn't done. I am dubious the Clinton scandal was actually anything like as bad as the claimants said, but it doesn't really matter. If these protocols were all in place, then it would be absolutely impossible for secure data to be transferred to unsecured devices, and absolutely impossible for secure data to be copied to machines that had no "need to know", regardless of any passwords obtained and any clearance obtained.

If people are using unsecured phones, unsecured protocols, unsecured satellite links, etc., it is not because we don't know how to enforce good policy; the documents on how to do this are old and could do with updating, but they do in fact exist, as does software capable of enforcing those rules. It is because a choice has been made, by some idiot or other, to consider the risks and consequences perfectly reasonable costs of doing business with companies like Microsoft -- because companies like Microsoft simply aren't capable of producing systems that can achieve that kind of security, and everyone knows it.

Comment Re:Honestly this is small potatoes (Score 1) 102

In and of itself, that's actually the worrying part.

In the 1930s, and even the first few years of the 1940s, a lot of normal (and relatively sane) people agreed completely with what the fascists were doing. In Marina Abramović's "endurance art" piece Rhythm 0, normal (and relatively sane) people openly abused their right to do whatever they liked to her, at least up to the point where one tried to kill her with a gun that had been supplied as part of the installation, at which point the people realised they may have gone a little OTT.

Normal (and relatively sane) people will agree with, and support, all kinds of things most societies would regard as utterly evil, so long as (relative to some aspirational ideal) the evil is incremental, with each step in itself banal.

There are various (now-disputed) psychology experiments that attempted to study this phenomenon, but regardless of the credibility of those experiments, there's never really been much of an effort by any society to actually stop, think, and consider the possibility that maybe they're a little too willing to agree to stuff that maybe they shouldn't. People are very keen to assume that it's only other people who can fall into that trap.

"Normal and sane" is, sadly, as Rhythm 0 showed extremely well, not as impressive as we'd all like to think. The veneer of civilisation is beautiful to behold, but it runs awfully thin and chips easily. Normal and sane adults are not as distant from chimpanzees as our five million years of divergence would encourage us to think. Which is rather worrying, when you get right down to it.

Comment Re:Honestly this is small potatoes (Score 0) 102

Pretty much agree. I'd also add that we don't have a clear impression of who actually did the supposed rioting; the media were too busy being shot by the National Guard to get an overly clear impression.

(We know that during the BLM "riots" a suspiciously large number of the "rioters" were later identified as white nationalists, and we know that in the British police spy scandal the spies often advocated or led actions more violent than those espoused by the groups they had infiltrated, so I'd be wary of making any assumptions in the heat of the moment as to exactly who did what, until that is clearly and definitively known. If this had been a popular uprising, I would not have expected such small-scale disturbances -- the race riots of the 60s, the Rodney King riots, the British riots in Brixton or Toxteth in the 80s: these weren't the minor events we're seeing in California, which are on a very, very much smaller scale than the protest marches that have been taking place.)

This is different from the Jan 6th attempted coup, where those involved made it very clear they were indeed involved, and where they were very clearly affiliated with domestic terrorist groups such as the Proud Boys. Let's get some clear answers as to exactly what scale this was and who was involved, because, yes, this has a VERY Reichstag-fire vibe to it.

Comment Re:Honestly this is small potatoes (Score 2) 102

I would have to agree. There is no obvious end-goal of developing an America that is favourable to the global economy, to Americans, or even to himself, unless we assume that he meant what he said about ending elections and becoming a national dictator. The actions favour destabilisation, fragmentation, and the furthering of the goals of anyone with the power to become a global dictator.

Exactly who is pulling the strings is, I think, not quite so important. The Chechen leader has made it clear he sees himself as a future leader of the Russian Federation, and he wouldn't be the first tyrant to try to seize absolute power in the last few years. (Remember Wagner?) We can assume there are plenty lurking in the shadows, guiding things subtly in the hope that Putin will slip.

Comment Re:It's not a decline... (Score 4, Interesting) 162

I think people expect commercial social media networks to be something they can't be -- a kind of commons where you are exposed to the range of views that exist in your community. But that's not what makes social networks money, what makes them money is engagement, and consuming a variety of opinions is tiresome for users and bad for profits. When did you ever see social media trying to engage you with opinions you don't agree with or inform you about the breadth of opinion out there? It has never done that.

The old management of Twitter had a strategy of making it a big tent, comfortable for centrist views and centrist-adjacent views. This enabled it to function as a kind of limited town common for people who either weren't interested in politics, like authors or celebrities promoting their work, or who wanted to reach a large number of mainly apolitical people. This meant drawing lines on both sides of the political spectrum, and naturally people near the line on either side were continually furious with them.

It was an unnatural and unstable situation. As soon as Musk tried to broaden one side of the tent, polarization was inevitable. This means neither X nor Bluesky can be what Twitter was for advertisers and public figures looking for a broad audience.

At present I'm using Mastodon. For users of old Twitter, it must seem like an empty wasteland, but it's a non-commercial network, it has no business imperative to suck up every last free moment of my attention. I follow major news organizations who dutifully post major stories. I follow some interest groups which are active to a modest degree, some local groups who post on local issues, and a few celebrities like George Takei. *Everybody's* not on it, but that's OK; I don't want to spend more than a few minutes a day on the thing so I don't have time to follow everyone I might be interested in. Oh, and moderation is on a per-server basis, so you can choose a server where the admins have a policy you're OK with.

Comment Re:whatever happened to transparent government? (Score 3, Insightful) 39

No, there are all kinds of information the government has that are legitimately not available. Sensitive data on private citizens, for example, which is why people are worried about unvetted DOGE employees getting unfettered access to federal systems. Information that would put witnesses in ongoing criminal investigations at risk. Military operations in progress and intelligence assets in use.

The problem is that ever since there has been a legal means to keep that information secret, it has also been used to cover up government mistakes and misconduct. It's perfectly reasonable for a government to keep things from its citizens *if there is a specific and articulable justification* that can withstand critical examination.

And sometimes those justifications are overridden by public interest concerns -- specifically when officials really want to bury something like the Pentagon Papers because they are embarrassing to the government. "Embarrassing to the government" should be an argument against secrecy, because of the public interest in knowing the government is doing embarrassing things. In the end, the embarrassment caused by the Pentagon Papers was *good* for the country.

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 62

Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit.""
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffuturism.com%2Fai-godfat...
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive.
        "This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment." ...
      A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply.
      "This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."
      The concept of building "safe" AI is far from new, of course -- it's quite literally why several OpenAI researchers left OpenAI and founded Anthropic as a rival research lab.
      This one seems to be different because, unlike Anthropic, OpenAI, or any other companies that pay lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit -- though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others."

Yoshua Bengio seems like someone at least trying to build AI "scientists" from a cooperative-abundance perspective, rather than creating ever more competitive AI agents.

Of course, even that could go horribly wrong if the AI misleads people subtly.

From 1957: "A ten-year-old boy and Robby the Robot team up to prevent a Super Computer [which provided misleading outputs] from controlling the Earth from a satellite."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.imdb.com%2Ftitle%2Ftt0...

From 1992: "A Fire Upon the Deep", about an AI that misleads people exploring an old archive, who thought their exploratory AI work was air-gapped and firewalled as they built the advanced automation the AI suggested:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

There are lots of other sci-fi examples of deceptive AI (like in the Old Guy Cybertank series, and more). The worst are along the lines of a human (e.g. Dr. Smith of "Lost in Space") intentionally programming the AI (or AI-powered robot) to harm others for that person's intended benefit.

Or sometimes (like in a Bobiverse novel, spoiler) a human may bypass a firewall and unleash an AI out of a sense of worshipful goodwill, to unknown consequences.

But at least Yoshua Bengio's AI Scientist approach is not *totally* stupid, in the way a reckless race to create competitive commercial super-intelligent AIs surely otherwise is.

Some dark humor on that (with some links fixed up):
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....
====
[People are] right to be skeptical about AI. But I can also see that it is so seductive as a "supernormal stimulus" that it will have to be dealt with one way or another. Some AI-related dark humor by me.
* Contrast Sergey Brin this year:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffinance.yahoo.com%2Fnews...
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology.
* With a Monty Python sketch from decades ago:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgenius.com%2FMonty-pytho...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event. ...
          Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
        (The only thing that worries me, Jim, is being being the first one down that gullet.)"
====

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 62

If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

That is because our direction out of any singularity may have something to do with our moral direction going into it. So we desperately need to build a more inclusive, joyful, and healthy society right now.

But if we just continue extreme competition as usual between businesses and nations (especially for creating super-intelligent AI), then we are likely "cooked":
"it's over, we're cooked!" -- says [AI-generated] girl that literally does not exist (and she's right!)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fsingu...

As just one example, here is Eric Schmidt essentially saying that we are probably doomed if AI is used to create biowarfare agents (which it almost certainly will be if we don't change our scarcity-based perspective on using these tools of abundance):
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Alternatives: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...
"There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" [and economic] agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all."

And also: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-...
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

See also "The Case Against Competition" by Alfie Kohn:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alfiekohn.org%2Farti...
"This is not to say that children shouldn't learn discipline and tenacity, that they shouldn't be encouraged to succeed or even have a nodding acquaintance with failure. But none of these requires winning and losing -- that is, having to beat other children and worry about being beaten. When classrooms and playing fields are based on cooperation rather than competition, children feel better about themselves. They work with others instead of against them, and their self-esteem doesn't depend on winning a spelling bee or a Little League game."
