Comment The profit motive in medicine is too strong (Score 1) 28

23andMe had a good chance of making full-genome sequencing (a service they had not yet scaled up to) a commoditized offering with the privacy protections that need to be guaranteed for all customers. However, the business model collapsed along the way.

Now we see the most predictable outcome - someone who knows they can profit from the data is buying the remains of the company (with the data).

Only in the USA is that genomic data so valuable, and there is one sector of the economy that can benefit from it more than any other. Regeneron knows which industry that is, and while they aren't a direct part of it themselves, they know they need to serve it.

Regeneron bought the data to eventually sell it to the Health Insurance Cartel. The Cartel was granted an effective license to print money with the passage of the ACA, and they still own an overwhelming majority of Congress - on both sides of the aisle - but they want more power. With the genomic data they can start rewriting the rules on pre-existing conditions. As all other plans go up in price, they can start offering plans that are less expensive if you consent to DNA testing, which will lead to treatment for some conditions being denied.

We can't win as long as the system is set up this way. We can't change it when the people who benefit from it control the people who set the rules.

Comment Could we "pull the plug" on networked computers? (Score 1) 64

I truly wish you were right that all the fear mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and, to a lesser extent, plagues are more clearly recognized as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with the risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematic ends). It's as if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which they can't, thankfully). But how do you "pull the plug" on all cars -- especially if a transition from acting as a faithful companion to a "Christine" killer car happens overnight? Or what if even just all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019, on how much AI was already part of our everyday lives even then:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).

Turning off parts of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligent network (a network that most people have come to depend on, and which embodies AI being used to do many tasks) may be a huge challenge, if not an impossible one. He also suggests that a network can get smarter over time and unintentionally develop a survival instinct, as a natural consequence of trying to remain operational to perform its purported primary function in the face of random power outages (like those from lightning strikes).

But even if we needed to turn off AI, would we? As a (poor) analogy: while there have been brief periods where the global internet supporting the world wide web has been restricted in specific places, and some selective filtering of the internet is continuously ongoing in various nations (usually to give preference to local national web applications), would we be likely to turn off the global internet at this point even if it were somehow proven to cause great harm? We are so dependent on the internet for day-to-day commerce as well as, sigh, entertainment (i.e. so much "news") that I wonder whether doing so is even collectively possible now. The issue there is not technical (yes, IT server farm administrators and individual consumers with home PCs and smartphones could turn off every networked computer in theory) but social (would people actually do it).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But Greer's novel still seems like a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon, loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do, of course, especially if the rest of their life improves in various ways for whatever reasons.

Also, some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more easily centralized technologies (which is in some ways a concentration-of-wealth point rather than a strictly technological one). Contrast that with the independent, high-tech, self-maintaining AI cybertanks in the Old Guy Cybertank novels, which have built a sort of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Absolutely nothingburger (Score 0) 170

As a person well left of center on Bluesky, there's absolutely nothing I want to hear or interact with regarding the cesspool of MAGA and adjacent individuals. Seriously. Between the racism, sexism, misogyny, blatant hypocrisy, lies, destruction of truth and science and education, and being literally anti *everything* that could help ANYONE that doesn't offer an immediate quarterly ROI; fuck 'em.

People act like echo chambers are bad. Maybe if you're a knuckle-dragging, hateful, anti-science asshole. Sure. But MY echo chamber is filled with science, education, health, and generally being an empathetic human being towards other people, less fortunate people, and animals.

What interest do I have in listening to "the other side"? None, and I'm happy for it.

Comment Does anyone care? (Score 1, Insightful) 27

I realize that the herd animals of finance and the illustrious thought leadership of LinkedIn essentially assume that you are making coal-powered buggy whips by banging rocks together if you aren't doing nation-state levels of capex on chatbots; but is there any evidence that Apple is actually suffering from its alleged deficiencies?

The angriest reaction I've heard, though I'm only privy to anecdotes rather than significant amounts of buying information, is from people who were pissed that Apple went down the hypebeast route by pre-announcing a bunch of AI faff that wasn't ready to ship; that cut against their historical behavior of saying nothing about a category, or rubbishing it, until they decided to enter it -- plus the ongoing acknowledgement that Siri is useless.

Comment Did he rename his preferred existing parts? (Score 1) 106

The Trump administration has been largely a copy-paste production. When they initially wanted to "replace" the ACA back during his first term, their plan was to replace the ACA with the ACA - made better by putting his signature at the end instead of President Obama's. When they were finally called out on that, they quietly dropped their efforts to repeal the ACA, focusing instead on various things they can do in the name of "border security" (never mind that no effort has been made this term on the wall he used to talk about nonstop).

Comment Re: It's not a decline... (Score 3) 170

I don't know where this notion that Bluesky is an echo chamber comes from.

Example: Go into a pro-AI thread from a popular user right as it's posted and write "AI is a con. It's blatant planet-destroying theft from actual creative people to create a stochastic parrot that bullshits what you want to hear. You're watching a ventriloquist doll and believing that it's actually alive."

Then go into an anti-AI thread from a popular user right as it's posted and write "AI is clearly Fair Use under the Google Books standard. And while one can debate what the word "thinks" means, AI isn't "statistics", but rather, applies complex chains of fuzzy logic to solve problems. The creative works it creates are truly its own."

In both cases, watch the fireworks explode.

Do the same thing on, say, whether to support Ukraine, on a NAFO account vs. a tankie account. Or whether China is good or bad. Or Israel vs. Palestine. On and on and on. In the vast majority of topics, all common sides are pretty well represented. It's just a handful of specific topics that I think certain right wingers are talking about when they complain about Bluesky underrepresenting one side (racism, sexism, etc).

Comment Re:It's not a decline... (Score 4, Insightful) 170

Huh? Takei is quite popular on Bluesky.

Also, this whole article is nonsense. Basically - like all sites - every time there is an event that triggers lots of signups, you get a mix of people who don't stick around and people who do. So you get a curve that - without further events - steadily tapers down to something like 1/2 to 1/3 of its peak. Except that you keep getting further events. When you plot out the long-term trend of Bluesky's userbase, it has been very much upward, but it's come in the form of many individual spikes, each of which is followed by a decline to 1/2 to 1/3 of the spike's peak (if allowed to run long enough since the last spike), as the sketch below illustrates. The most recent spike is IMHO notable for how little decline there's been since then.
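Here's a minimal toy model of that spike-and-decay dynamic, in Python. The numbers, the simulate_userbase function, and its parameters are all hypothetical illustrations, not real Bluesky statistics; the point is just that repeated spikes with roughly one-third long-term retention still ratchet the baseline upward.

# A toy simulation (hypothetical numbers, not real Bluesky data) of the
# spike-then-decay pattern described above: each signup wave decays toward
# roughly 1/3 of its peak, yet the baseline keeps ratcheting upward.

def simulate_userbase(spike_sizes, retention=0.35, daily_decay=0.1, days_between=120):
    """Model active users as a permanent baseline plus a decaying transient.

    spike_sizes  -- new signups from each event
    retention    -- fraction of each wave that sticks around long-term
    daily_decay  -- per-day decay of the non-retained portion
    days_between -- quiet days between signup events
    """
    baseline = 0.0
    history = []
    for spike in spike_sizes:
        baseline += spike * retention          # the part that stays
        transient = spike * (1 - retention)    # the part that drifts away
        for _ in range(days_between):
            transient *= (1 - daily_decay)
            history.append(baseline + transient)
    return history

if __name__ == "__main__":
    # Four made-up signup waves of increasing size.
    curve = simulate_userbase([1_000_000, 2_000_000, 3_000_000, 5_000_000])
    for day in range(0, len(curve), 60):
        print(f"day {day:4d}: {curve[day]:,.0f} active users")

With those made-up inputs, each wave falls back to roughly a third of its height above the previous floor, but the floor itself never goes back down -- which is the shape the long-term charts seem to show.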

I see basically zero migration from long-time users back to Twitter.

Comment Re: It's all so confusing (Score 0) 56

Since the 1960s the military has of course run other models and evaluated prototypes, and even some large cities' law enforcement agencies held demos and did evaluations, though as far as I know there were no official purchases or use by any police before 2002. But who knows what three-letter agencies like the CIA were doing with them; there could be classified programs.
