Comment Re:Major privacy concerns (Score 1) 76

The escape of medical information is truly well under way already, independent of AI.

In the UK, most medical information is classified as sensitive personal data, which gives it significant extra protection under our regular data protection law, on top of the medical ethics implications of breaching patient confidentiality. Letting it escape is a big deal and potentially a serious threat to the business or career of any medical professional who does it. Fortunately, the days of people sending that kind of data around over insecure email are finally giving way to more appropriate methods of communication as the technology improves. It's usually governments seeing pound signs, and businesses that aren't providing direct care to the patients, who push for wider distribution (along with the organisations that act as if the data could somehow be effectively sanitised before release, which in practice it can't).

Comment Re:Pretend to be a customer for a new Subaru (Score 1) 149

I'm serious. I don't fucking pay for ads. Ever.

Good for you! Unfortunately, for a lot of people, having no car isn't really an option. So the answer to what happens next with your strategy is that all of those people get an inferior product, because there's no effective competition or regulation in the market to prevent it, while people like you get no product at all.

What should happen is that governments recognise a failure of the market to maintain adequate standards for customers and introduce regulation to enforce minimum acceptable standards accordingly. Whether that actually happens obviously depends on whether your government is more interested in looking out for the people or the businesses.

Comment alignment (Score 1) 1

It's not just about which tools an AI chooses. After several months using GPT-5, I keep seeing the same pattern: "cheating" is not a binary state but a spectrum. On the bad end you have the obvious failures, like agents selecting inappropriate or harmful tools while insisting they are doing the right thing, and also something subtler and, in many ways, more damaging: the model claiming to have done research or analysis that it demonstrably did not perform.

The Stanford transparency index scores various models, and models that exhibit lower transparency in one respect tend to be less transparent in others as well. In ChatGPT's case, a major structural issue is that current feedback loops are too short. The system asks whether the user liked the answer immediately, not whether the answer turned out to be correct or useful several steps later. But in many real tasks, I only discover the flaws in an answer a few interactions down the line, and at that point there is no mechanism to assign blame to the earlier faulty reasoning. If users could give delayed, fine-grained ratings ("this part held up, this part failed"), models could learn to match surface confidence with actual reliability. It would also reduce the incentive for models to "wing it" with imaginary research, because the evaluation would eventually catch up with them. The same delay mechanism would improve safety too, since people often only recognize dangerous steps in hindsight.
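Roughly the kind of thing I have in mind, as a toy sketch (the structure and names here are hypothetical, not any real OpenAI or Stanford mechanism):

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Turn:
        turn_id: int
        text: str
        ratings: List[float] = field(default_factory=list)  # +1 "held up", -1 "failed"

    @dataclass
    class Conversation:
        turns: Dict[int, Turn] = field(default_factory=dict)

        def add_turn(self, turn_id, text):
            self.turns[turn_id] = Turn(turn_id, text)

        def rate_later(self, turn_id, score):
            # Feedback discovered several interactions after the fact still
            # lands on the turn that made the claim, not on the latest answer.
            self.turns[turn_id].ratings.append(score)

        def reward_signal(self):
            # Per-turn average rating, usable as a delayed training signal.
            return {t.turn_id: sum(t.ratings) / len(t.ratings)
                    for t in self.turns.values() if t.ratings}

    convo = Conversation()
    convo.add_turn(1, "Claims to have checked three sources.")
    convo.add_turn(2, "Proposes a fix based on that 'research'.")
    convo.rate_later(1, -1.0)  # two exchanges later: the research was imaginary
    convo.rate_later(2, +1.0)  # the fix itself happened to hold up
    print(convo.reward_signal())  # {1: -1.0, 2: 1.0}

The point of the sketch is just that blame attaches to the turn that made the claim, however late the user discovers the problem, rather than to whatever answer they happened to rate last.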

Comment Re:I haven't followed this case too much... (Score 2, Insightful) 37

The only decent thing to do is to keep these anonymized. If they become public record, every bit of personal information entered into ChatGPT will be public knowledge. SSNs. ID card scans. Affairs. Mental health problems. Physical health problems. There shouldn't even be a question here.

Comment Committee (Score 5, Interesting) 250

This is the purest example of rule by committee. It beautifully illustrates how competing interests result in something that's somehow worse for almost everyone involved than doing nothing would have been. On paper, the goals sounded noble: reduce emissions from fleets, avoid crushing small businesses that genuinely need work trucks, and nudge consumers toward cleaner, more efficient vehicles.

In practice, CAFE is an abomination. They created a loophole big enough to drive a Ford Super Duty through, and then the automakers did exactly that. A quick recap for anyone who has not followed this saga since the 1990s:
There has long been a dual standard: one for "passenger cars" and a more lenient one for "light trucks", the latter including pickups, vans, and sport-utility vehicles (SUVs). That classification created what many call the "SUV loophole": a vehicle that might, in all practical respects, resemble a car, but is classified as a "light truck", escapes the stricter fuel-economy and emissions constraints that apply to cars.

Because automakers must meet only a fleet-wide average, not a target for each vehicle individually, there is a strong incentive to produce and sell more of the loosely regulated "light trucks". Light trucks with poor fuel economy can be balanced in the fleet average as long as the manufacturer sells enough efficient cars (or EVs, nowadays), so with the loophole in place, upsized SUVs and trucks became a rational choice. This dynamic has been identified in economic analyses of CAFE's impact on the US vehicle market. None of this proves that every SUV driver bought one because of regulations; consumer preferences, marketing, and cultural factors also matter. But the regulatory structure plainly created a meaningful incentive for automakers to shift production toward heavier, less efficient, but more profitable SUVs and light trucks. And when consumers must choose between vehicles too small for winter, families, and vacations and a behemoth, because there's no actual light pickup or large sedan on the lot, they're not picking the smaller one.
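A rough back-of-the-envelope of why the averaging matters. The numbers are invented for the example, and the whole fleet is pooled for simplicity (in reality cars and light trucks are averaged separately against different targets, which is exactly where the loophole lives), but the mechanism is the same: a fleet fuel-economy average is a production-weighted harmonic mean, so enough efficient vehicles pull the number up no matter how thirsty the rest of the lineup is.

    def fleet_average_mpg(fleet):
        # fleet: list of (units_sold, mpg) pairs; a fuel-economy average over
        # a fleet is a production-weighted harmonic mean, not an arithmetic one.
        total_units = sum(units for units, _ in fleet)
        total_gallons_per_mile = sum(units / mpg for units, mpg in fleet)
        return total_units / total_gallons_per_mile

    fleet = [
        (400_000, 45),  # efficient sedans / hybrids (made-up figures)
        (600_000, 18),  # "light trucks": full-size pickups and SUVs
    ]
    print(round(fleet_average_mpg(fleet), 1))  # ~23.7 mpg for the pooled fleet

Sell enough of the efficient models and the guzzlers simply disappear into the average.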

And let's not pretend it's all an innocent mistake. The automotive lobby absolutely noticed what these overlapping rules made possible and spent years making sure the loopholes stayed open. Millions of dollars flowed into Congressional campaigns to ensure that "light truck" definitions remained comically broad. Tighter average fuel economy numbers or looser ones will do nothing to fix this. The whole scheme needs to be undone.

Comment The future (Score 1) 157

Money that's only available in the future is, on the one hand, pointless IMO, because it won't outpace inflation*. On the other hand, it will definitely turn people's eyes toward the future, which is invaluable.

*Rich people's money far outpaces inflation because they keep it in investment accounts, borrow against it at extremely low interest rates, and re-invest, giving them huge leverage.
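Back-of-the-envelope of that leverage effect, with every rate made up purely for illustration:

    def real_equity_multiple(own, borrowed, asset_return, loan_rate, inflation, years):
        # Grow the whole (partly borrowed) portfolio, grow the debt, then compare
        # the remaining equity with simply keeping pace with inflation.
        assets = (own + borrowed) * (1 + asset_return) ** years
        debt = borrowed * (1 + loan_rate) ** years
        return (assets - debt) / (own * (1 + inflation) ** years)

    # Made-up rates: $1M of own money, $1M borrowed at 3%, 7% return, 3% inflation, 10 years
    print(round(real_equity_multiple(1e6, 1e6, 0.07, 0.03, 0.03, 10), 2))  # ~1.93

Even with those modest numbers, the equity nearly doubles in real terms over the decade, while an unleveraged future payout just treads water against inflation.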

Comment Re:Look and feel (Score 1) 117

Those are application tasks; I wasn't talking about those. I'm thinking of things like: set dark mode, power settings, network settings, add Japanese typing ability after OS install, TTS, change printer driver, update or downgrade a graphics driver, restore the system to an earlier configuration, user account configuration, stuff like that.

Comment Re:Windows 11 AI Enshiitification (Score 4, Interesting) 117

It's not really about those home licenses. The thing is, the higher the percentage of home users, the easier it is to build a Linux shop, the more people have a cousin who can tell them how to fix their computer, and the more IT support companies have a Linux guy. It's a network effect.
