Comment Re:Did anyone do the math? (Score 1) 29

I wonder if anyone has ever done studies of ad effectiveness vs. frequency. If they ran half the ads but each was twice as effective, they could charge twice as much, keep the same revenue, and have happier customers. There must be some sort of Laffer curve: no ads means no revenue but more viewers; 100% ads means no viewers and no revenue. Where is the sweet spot? Some old TV shows were 55 minutes long with 5 minutes of ads; then they switched to 50 minutes of show and 10 of ads, so the older shows have to have 5 minutes chopped out to be syndicated.

I bet the broadcasters have done the studies, and 10 minutes must produce more revenue, but I'd really like to see those studies.
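
A toy sketch of that trade-off (the audience-decay constant K below is invented purely for illustration; the whole point is that only the real studies could say what the curve actually looks like):

import math

# Toy "Laffer curve" for ad load: revenue = audience(a) * a * rate, where
# the audience shrinks as ad minutes per hour (a) grow. K is a made-up
# sensitivity: each extra ad minute costs roughly 10% of the audience.
K = 0.10

def audience(ad_minutes):
    # fraction of potential viewers who stick around
    return math.exp(-K * ad_minutes)

def revenue(ad_minutes, rate_per_viewer_minute=1.0):
    return audience(ad_minutes) * ad_minutes * rate_per_viewer_minute

# With an exponential falloff the peak lands at 1/K ad minutes.
best = max(range(0, 61), key=revenue)
print(best, round(revenue(best), 3))   # 10 minutes for K = 0.10

Change the shape of the falloff and the sweet spot moves, which is exactly why the study data would be interesting to see.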

Comment Bait and switch (Score 1) 29

When they first bundled video with Prime, I figured it wouldn't be too long before it went south. It did. Series would include the first season only, they dropped a few I was in the middle of watching, and they shifted some shows that had been included over to Britbox and other services.

Then they announced the ads, and I stopped watching. You're going to charge me for something I didn't ask for and can't get a reduction for not using, AND you're going to interrupt it with ads? No thanks. Stopped watching. Well, I was probably watching only one or two shows a week.

But it still annoys me that I'm paying for something I don't want, which is now supported by ads on top of that. I've been keeping track of how much free shipping I actually use and comparing it to Walmart and others. It's close enough that one more screwup and I might dump Prime.

Comment Re:Could we "pull the plug" on networked computers (Score 1) 66

Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and contact with AI luminaries like Marvin Minsky, writes about AI and robotics.

From the Manga version of "The Two Faces of Tomorrow":

"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F3...

"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F4...

Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.com%2Fnews%2Fartic...

I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer reshapes the world to its preferences using nothing but printed letters, with checks enclosed, mailed to companies. It was insightful even back then to see how a computer could simply hijack our socio-economic system to its own benefit.

Arguably, modern corporations are a form of machine intelligence, even if some of their components are human. I wrote about this in 2000:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."

So, as another question, how easily can we-the-people "pull the plug" on corporations these days? I guess there are examples (Theranos?) but they seem to have more to do with fraud -- rather than with a company pursuing the capitalist ideal of privatizing gains while socializing risks and costs.

It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. And meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."

Maybe we don't have an AI issue as much as a corporate governance issue? Which circles around to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment Re:fake news!!! (Score 2) 99

CBP and the government have been collecting data directly from the airlines ever since the aftermath of 9/11 through a number of programs, for example to check passengers against watch lists and to verify the identity of travelers on international flights.

What has changed is that by buying data from a commercial broker instead of through a congressionally instituted program, the government bypasses judicial review and the limits set by Congress on data collected through those programs -- for example, it can track passengers on domestic flights even if they're not on a watch list.

Comment Re: It's not a decline... (Score 1) 177

Fascism isn't an ideology; it's more like a disease of ideology. The main characteristic of fascist leaders is that they're unprincipled; they use ideology to control others, they're not bound by it themselves. It's not just that some fascists are left-wing and others are right-wing. Any given fascist leader is left-wing when it suits his purposes and right-wing when that works better for him. The Nazis were socialists until they got their hands on power and into bed with industry leaders, but it wasn't a turn to the right. The wealthy industrialists thought they were using Hitler, but it was the other way around. The same with Mussolini. He was socialist when he was a nobody but turned away from that when he lost his job at a socialist newspaper for advocating militarism and nationalism.

In any case, you should read Umberto Eco's essay on "Ur-Fascism", which tackles the extreme difficulties in characterizing fascism as an ideology (which as I stated I don't think it is). He actually lived under Mussolini.

Comment Could we "pull the plug" on networked computers? (Score 1) 66

I truly wish you were right that all the fear-mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes, and to a lesser extent plagues, are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with the risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's as if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which they can't, thankfully). But how do you "pull the plug" on all cars -- especially if the transition from faithful companion to "Christine" killer car happens overnight? Or if all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019, on how much AI was already part of our everyday lives:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).

Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (a network that most people have come to depend on, and which embodies AI being used for many tasks) may be a huge challenge, if not an impossible one. He also suggests that such a network can get smarter over time and unintentionally develop a survival instinct, as a natural consequence of trying to remain operational for its purported primary function in the face of random power outages (like those from lightning strikes).

But even if we wanted to turn off AI, would we? As a (poor) analogy: there have been brief periods when the global internet has been restricted in specific places, and some nations continuously filter it (usually to give preference to local national web applications), but would we be likely to turn off the global internet at this point even if it were somehow proven to produce great harm? We are so dependent on the internet for day-to-day commerce as well as, sigh, entertainment (i.e. so much "news") that I wonder whether turning it off is even possible collectively now. The issue there is not technical (yes, IT server-farm administrators and individual consumers with home PCs and smartphones could in theory turn off every networked computer) but social (would people do it?).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But Greer's novel still seems like a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if it might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon and loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do, of course, especially if the rest of their life improves in various ways for whatever reasons.

Also some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more-easily centralized technologies (which is a concentration-of-wealth point in some ways rather than a strictly technological point). Contrast with the independent high-tech self-maintaining AI cybertanks in the Old Guy Cybertank novels who have built a sort-of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Re:Don't forget Starlink (Score 1) 107

Back in the days of the Rainbow Series, the Orange Book required that data marked as secure could not be transferred to any location or user that was (a) not authorised to access it or (b) lacking the security permissions, regardless of any other authorisation. There was an additional protocol listed in those manuals -- I don't know if it was ever applied -- which stated that data could not be transferred to any device or any network that did not enforce the same security rules or was not authorised to access that data.

Regardless, in more modern times, these protocols were all abolished.

Had they not been abolished, and had the full set of protocols been put in place and enforced, then you could have installed all the unsecured connections and unsecured servers you liked, without limit. It wouldn't have made the slightest difference to actual security, because the protocols would have required that the system as a whole never place sensitive data on such systems.
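
As a rough sketch of that kind of rule (a Bell-LaPadula-style mandatory access control check, with made-up names and levels -- not the actual Orange Book mechanism): data flows only to destinations that are cleared for its level, have need-to-know for its compartments, and themselves enforce the same labelling.

from dataclasses import dataclass

# Illustrative classification lattice; levels and category names are invented.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str                            # hierarchical classification
    categories: frozenset = frozenset()   # need-to-know compartments

@dataclass(frozen=True)
class Destination:
    clearance: Label
    enforces_mac: bool                    # does the target enforce the same rules?

def transfer_allowed(data: Label, dest: Destination) -> bool:
    # (a) destination is cleared for the data's level,
    # (b) destination has need-to-know for every compartment,
    # (c) destination itself enforces mandatory access control.
    cleared = LEVELS[dest.clearance.level] >= LEVELS[data.level]
    need_to_know = data.categories <= dest.clearance.categories
    return cleared and need_to_know and dest.enforces_mac

# Example: labelled data cannot land on an unsecured box,
# no matter what passwords its owner holds.
secret_cable = Label("SECRET", frozenset({"DIPLOMATIC"}))
personal_server = Destination(Label("UNCLASSIFIED"), enforces_mac=False)
print(transfer_allowed(secret_cable, personal_server))   # False

The point is that under such a regime the check travels with the label, so "authorised user, unsecured box" is still a denied transfer.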

After the Clinton email server scandal, the Manning leaks, and the Snowden leaks, I'm astonished this wasn't done. I am dubious the Clinton scandal was actually anything like as bad as the claimants said, but it doesn't really matter. If these protocols were all in place, then it would be absolutely impossible for secure data to be transferred to unsecured devices, and absolutely impossible for secure data to be copied to machines that had no "need to know", regardless of any passwords obtained and any clearance obtained.

If people are using unsecured phones, unsecured protocols, unsecured satellite links, etc., it is not because we don't know how to enforce good policy -- the documents on how to do this are old and could do with updating, but they do in fact exist, as does software capable of enforcing those rules. It is because a choice has been made, by some idiot or other, to treat the risks and consequences as perfectly reasonable costs of doing business with companies like Microsoft -- companies that simply aren't capable of producing systems that achieve that kind of security, and everyone knows it.

Comment Re:Honestly this is small potatoes (Score 1) 107

In and of itself, that's actually the worrying part.

In the 1930s, and even the first few years of the 1940s, a lot of normal (and relatively sane) people agreed completely with what the fascists were doing. In Marina Abramović's endurance-art piece Rhythm 0, normal (and relatively sane) people openly abused their right to do whatever they liked to her, at least up to the point where one tried to kill her with a gun that had been supplied as part of the installation, at which point people realised they may have gone a little OTT.

Normal (and relatively sane) people will agree with, and support, all kinds of things most societies would regard as utterly evil, so long as (relative to some aspirational ideal) the evil is incremental, with each step in itself banal.

There are various (now-disputed) psychology experiments that attempted to study this phenomenon, but regardless of the credibility of those experiments, there's never really been much of an effort by any society to actually stop, think, and consider the possibility that maybe they're a little too willing to agree to stuff that maybe they shouldn't. People are very keen to assume that it's only other people who can fall into that trap.

Normal and sane, as Rhythm 0 showed extremely well, is sadly not as impressive as we'd all like to think it is. The veneer of civilisation is beautiful to behold, but it runs awfully thin and chips easily. Normal and sane adults are not as distant from chimpanzees as our five million years of divergence would encourage us to think. Which is rather worrying, when you get right down to it.

Comment Re:Honestly this is small potatoes (Score 0) 107

Pretty much agree. I'd also add that we don't have a clear impression of who actually did the supposed rioting; the media were too busy being shot by the National Guard to get an overly clear picture.

(We know that during the BLM "riots" a suspiciously large number of the "rioters" were later identified as white nationalists, and we know that in the British police spy scandal the spies often advocated or led actions more violent than those espoused by the groups they had infiltrated, so I'd be wary of making any assumptions in the heat of the moment as to exactly who did what, until that is clearly and definitively known. If this had been a popular uprising, I would not have expected such small-scale disturbances -- the race riots of the 60s, the Rodney King riots, the British riots in Brixton or Toxteth in the 80s: these weren't the minor events we're seeing in California, which are on a very, very much smaller scale than the protest marches that have been taking place.)

This is different from the Jan 6th attempted coup, in which those involved made it very clear they were indeed involved, and in which they were very clearly affiliated with domestic terrorist groups such as the Proud Boys. Let's get some clear answers as to exactly what scale was involved and who was involved, because, yes, this has a VERY Reichstag-fire vibe to it.

Comment Re:Honestly this is small potatoes (Score 2) 107

I would have to agree. There is no obvious end-goal of developing an America favourable to the global economy, to Americans, or even to himself, unless we assume he meant what he said about ending elections and becoming a national dictator. The actions favour destabilisation, fragmentation, and the furthering of the goals of anyone with the power to become a global dictator.

Exactly who is pulling the strings is, I think, not quite so important. The Chechen leader has made it clear he sees himself as a future leader of the Russian Federation, and he wouldn't be the first tyrant to try and seize absolute power in the last few years. (Remember Wagner?) We can assume that there's plenty lurking in the shadows, guiding things subtly in the hopes that Putin will slip.

Comment Re:It's not a decline... (Score 4, Interesting) 177

I think people expect commercial social media networks to be something they can't be -- a kind of commons where you are exposed to the range of views that exist in your community. But that's not what makes social networks money; what makes them money is engagement, and consuming a variety of opinions is tiresome for users and bad for profits. When did you ever see social media trying to engage you with opinions you don't agree with, or to inform you about the breadth of opinion out there? It has never done that.

The old management of Twitter had a strategy of making it a big tent, comfortable for centrist views and centrist-adjacent views. This enabled it to function as a kind of limited town common for people who either weren't interested in politics, like authors or celebrities promoting their work, or who wanted to reach a large number of mainly apolitical people. This meant drawing lines on both sides of the political spectrum, and naturally people near the line on either side were continually furious with them.

It was an unnatural and unstable situation. As soon as Musk tried to broaden one side of the tent, polarization was inevitable. This means neither X nor Bluesky can be what Twitter was for advertisers and public figures looking for a broad audience.

At present I'm using Mastodon. For users of old Twitter, it must seem like an empty wasteland, but it's a non-commercial network, it has no business imperative to suck up every last free moment of my attention. I follow major news organizations who dutifully post major stories. I follow some interest groups which are active to a modest degree, some local groups who post on local issues, and a few celebrities like George Takei. *Everybody's* not on it, but that's OK; I don't want to spend more than a few minutes a day on the thing so I don't have time to follow everyone I might be interested in. Oh, and moderation is on a per-server basis, so you can choose a server where the admins have a policy you're OK with.
