
Comment Lots of reasons (Score 2) 8

The first and biggest is that AI tools tend to reinforce whatever you already believe, because they are trying to maintain engagement the same way social media does. So he's not really getting a second opinion from a trusted advisor; he's got a crony telling him what he wants to hear.

That's before we even talk about AI hallucinations and the wildly inaccurate information you can get from them.

The only good reason to use AI to run a country is if you're Donald Trump, and that's because it couldn't possibly get worse at that point.

Comment Ryzen 5600 actually (Score 1) 20

But yeah, in Trump's economy I'm not going to be doing much in the way of upgrades, because he has completely wrecked everything and none of us know if we're going to have jobs in a week, no matter where we work, because we have a lunatic in charge of the economy.

But I'm sure you're not a Republican and you would never vote for Trump. I bet you're not American either, because your kind never is when you're called out.

Are you going to tell me I have TDS? You know, in a few years you'll be able to have me killed for that. I mean, sure, you're going to be homeless anyway, but at least you'll know I was murdered for having TDS, and that will make the cat food you're eating out of a dumpster so much more delicious.

Comment It's shooting up in price (Score 3, Interesting) 20

Because one of the big manufacturers said they're going to stop making DDR4, so of course it's shooting up. The industry is moving on to DDR5.

There's nothing unprecedented about this; it happens every time during the transition between RAM generations. The price shoots up for a little while and then collapses as people's old RAM makes it onto eBay.

I just bought 32 GB of DDR3 for an old i5 board I've got kicking around, and I paid $30 for it shipped.

Comment Re:Yet they have 6 million slop articles (Score 1) 27

This doesn't seem right. So some obscure language might not have an article at all because someone hasn't written it in that language, or facts are different or missing from one language to the next?

Despite how it seems to you, that is both right and correct.

It's correct because that's how it works, and it's right because having articles in a given language written by someone who actually speaks that language is what makes it possible to know whether they are slop.

Seems Wikipedia should be taking all these different-language articles, merging the most factual details from each into a master article, and then creating translated articles.

If you want translations, use a translation tool.

If you want details to be propagated from articles in languages you don't speak into the articles in languages you do speak, then make that happen.

If you don't want to put in the time to account for the barriers in place to prevent slop articles, Wikipedia doesn't want your input. Make your own encyclopedia. You may use Wikipedia articles as your starting point. GLWT!

Submission + - OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com)

An anonymous reader writes: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case—short of settling—as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size—computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," the company said. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially leaving them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT-equivalent tool, a dispute that could push the NYT to settle the fight over ChatGPT logs.
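For a rough sense of why a sampling expert might say 20 million logs is already plenty, here's a back-of-envelope sketch using the standard margin-of-error formula for an estimated proportion. This is not Berg-Kirkpatrick's actual analysis, and the 1% prevalence figure is a made-up placeholder:

```python
from math import sqrt

def margin_of_error(p: float, n: float, z: float = 1.96) -> float:
    """95% confidence-interval half-width for a proportion p estimated from n samples."""
    return z * sqrt(p * (1 - p) / n)

p = 0.01  # hypothetical: assume 1% of sampled chats regurgitate paywalled articles
for n in (20e6, 120e6):
    print(f"n = {n:,.0f}: margin of error = ±{margin_of_error(p, n):.6f}")
# n = 20,000,000:  ±0.000044 (about ±0.0044 percentage points)
# n = 120,000,000: ±0.000018 (about ±0.0018 percentage points)
```

Sixfold more data only shrinks the error bars by a factor of √6 ≈ 2.4, which is presumably the statistical core of OpenAI's objection.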

Submission + - Google's New Genie 3 AI Model Creates Video Game Worlds In Real Time (theverge.com)

An anonymous reader writes: Google DeepMind is releasing a new version of its AI “world” model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time. The company is also promising that users will be able to interact with the worlds for much longer than before and that the model will actually remember where things are when you look away from them. [...] Genie 3 seems like it could be a notable step forward. Users will be able to generate worlds with a prompt that supports a “few” minutes of continuous interaction, which is up from the 10–20 seconds of interaction possible with Genie 2, according to a blog post.

Google says that Genie 3 can keep spaces in visual memory for about a minute, meaning that if you turn away from something in a world and then turn back to it, things like paint on a wall or writing on a chalkboard will be in the same place. The worlds will also have a 720p resolution and run at 24fps. DeepMind is adding what it calls “promptable world events” into Genie 3, too. Using a prompt, you’ll be able to do things like change weather conditions in a world or add new characters.
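For scale, a minute of visual memory at the stated resolution and frame rate covers a lot of pixels. A quick back-of-envelope calculation, assuming uncompressed 24-bit RGB frames (almost certainly not how the model actually represents its memory):

```python
width, height, fps = 1280, 720, 24   # 720p at 24fps, per the blog post
memory_seconds = 60                  # roughly a minute of visual memory

frames = fps * memory_seconds                    # 1,440 frames
raw_bytes = frames * width * height * 3          # 3 bytes per RGB pixel
print(f"{frames} frames ≈ {raw_bytes / 1e9:.1f} GB of raw pixels")  # ≈ 4.0 GB
```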

Comment So that's not the actual problem (Score 1) 67

The problem with having advanced credentials is that if you don't have experience, every potential employer looks at you as a short-term hire.

My kid ran into this: they couldn't go straight into grad school because it was made clear they'd never be able to get a job if they did.

Every prospective employer will look at your experience and agree that you're valuable and capable of doing good, profitable work for them, but they will also fully expect you to hang around just long enough to get a little bit of experience and then leave.

So they invest a bunch of money in you, training you and getting you up to speed and all that fun stuff, and then right as you've gotten to the point where you're doing profitable work for them, you leave for somewhere else, usually a competitor, to make more money.

So in just about every field you have to get your undergrad degree, go work for a few years, come back and get your graduate degree, and head on out.

At that point they're not expecting you to immediately leave the same way.

There are exceptions to this for hyper-in-demand employees, but that's pretty much just AI programmers.

Basically extremely narrow specialties where companies don't have a choice but to hire whatever they can get.

What sucks is that the stupid-ass Big Beautiful Bill just killed all the funding for the programs my kid needed to go to grad school. So right as they were getting ready to start seriously applying, they had the rug pulled out from under them.

If they could have gone directly into grad school that would have been fine, but again, that's basically a career ender.

Submission + - White House Orders NASA to Destroy Important Satellite (futurism.com)

ArchieBunker writes: The White House has instructed NASA employees to terminate two major, climate change-focused satellite missions.

As NPR reports, Trump officials reached out to the space agency to draw up plans for terminating the two missions, called the Orbiting Carbon Observatories. They've been collecting widely-used data, providing both oil and gas companies and farmers with detailed information about the distribution of carbon dioxide and how it can affect crop health.

One is attached to the International Space Station, and the other is collecting data as a stand-alone satellite. The latter would meet its permanent demise after burning up in the atmosphere if the mission were to be terminated.

We can only speculate as to why the Trump administration wants to end the missions. But considering President Donald Trump's staunch climate change denial and his administration's efforts to deal the agency's science directorate a potentially existential blow, it's not a difficult guess.

Worse yet, the two observatories had been expected to function for many more years, scientists working on them told NPR. A 2023 review by NASA concluded that the data they'd been providing had been "of exceptionally high quality."

The observatories provide detailed carbon dioxide measurements across various locations, allowing scientists to get a detailed glimpse of how human activity is affecting greenhouse gas emissions.

Former NASA employee David Crisp, who worked on the Orbiting Carbon Observatories' instruments, told NPR that current staffers reached out to him.

"They were asking me very sharp questions," he said. "The only thing that would have motivated those questions was [that] somebody told them to come up with a termination plan."

Crisp said it "makes no economic sense to terminate NASA missions that are returning incredibly valuable data," pointing out it costs only $15 million per year to maintain both observatories, a tiny fraction of the agency's $25.4 billion budget.
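The arithmetic behind "tiny fraction" checks out, using the figures quoted in the article:

```python
annual_cost = 15e6      # $15 million per year to maintain both observatories
nasa_budget = 25.4e9    # NASA's $25.4 billion budget
print(f"{annual_cost / nasa_budget:.3%}")  # 0.059% of the agency's budget
```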

Other scientists who've used data from the missions have also been asked questions related to terminating the missions.

The observatories are just two of dozens of space missions facing existential threats under the Trump administration's proposed fiscal year 2026 budget. Countless scientists have been outraged by the proposal, arguing it could precipitate an end to the United States' leadership in space.

Lawmakers have since drawn up a counteroffer that would keep NASA's budget roughly in line with this year's.

"We rejected cuts that would have devastated NASA science by 47 percent and would have terminated 55 operating and planned missions," said senator and top appropriator Chris Van Hollen (D-MD) in a July statement, as quoted by Bloomberg.

Simply terminating Earth-monitoring missions to pursue an anti-science agenda could be a massive self-own, lawmakers say, and potentially illegal as well, since it would override existing, allocated budgets.

"Eliminating funds or scaling down the operations of Earth-observing satellites would be catastrophic and would severely impair our ability to forecast, manage, and respond to severe weather and climate disasters," House representative and Committee on Science, Space and Technology ranking member Zoe Lofgren (D-CA) told NPR.

"The Trump administration is forcing the proposed cuts in its FY26 budget request on already appropriated FY25 funds," she added. "This is illegal."

Comment Oh yeah you will shower in piss (Score 1) 49

We will all shower in piss water and drink piss water.

The clean water will go to the data centers and the processed water will be what us peasants are stuck with.

Initially that processed water will be heavily purified, but over time budgets get cut, nobody wants to pay, and before long you're going to start having all sorts of fun things in that water, like some of the drugs that get pissed out, or bacteria, or whatever.

But within the next 10 to 15 years Americans are going to be without clean water, and not just a handful of people living out in the sticks that nobody cares about.

But like I always say, it's a small price to pay so we can have a panic attack about trans girls playing field hockey in the Midwest. If you don't like that moral panic, I've got five or six others you can choose from.

Comment Re:Onsite generation (Score 1) 49

Grid conditions are highly variable, and if you're in the AI biz, you aren't gonna want to shut down your LLMs for a heat wave.

There's plenty of "AI"-related processing that could be delayed and nobody would notice: training of new models, for example. You get [access to] a new model a couple of days later and you won't even notice, because new models already arrive whenever they arrive. Google is also sufficiently distributed that it can simply move this processing to another location; both the queries and the results are very small, and there would be no appreciable delay from doing the processing far away.
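To make the point concrete, here's a minimal sketch of what a deferrable-workload scheduler could look like. The grid-stress signal and threshold are hypothetical stand-ins for whatever demand-response feed or internal capacity metric an operator actually has:

```python
import random
import time
from typing import Callable

STRESS_THRESHOLD = 0.8  # hypothetical: defer batch work when grid stress exceeds this

def get_grid_stress() -> float:
    """Stand-in for a real signal: a utility demand-response API,
    a wholesale electricity price feed, or an internal capacity metric."""
    return random.random()

def run_deferrable(job: Callable[[], None], poll_seconds: int = 300) -> None:
    # Serving user queries is latency-sensitive; training runs are not.
    # Waiting out a heat wave just means the new model lands a bit later.
    while get_grid_stress() > STRESS_THRESHOLD:
        time.sleep(poll_seconds)
    job()

run_deferrable(lambda: print("training batch started"), poll_seconds=1)
```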

Comment The llm is getting better (Score 2) 31

August of last year, HashiCorp decided to move its products away from open source licenses to a source-available license with fuzzy parameters on its use in production. Shortly afterwards, the community forked Terraform as OpenTF, which was then endorsed and picked up by the Linux Foundation as OpenTofu. Now the project is ready to declare a stable release that it says is a production-ready "drop-in replacement for Terraform."

OpenTofu isn't a direct clone of Terraform, however. Kuba Martin, the interim technical lead of OpenTofu, says that the project is working to include client-side state encryption and other features that the community has proposed. Read the post for more details, but it looks like the project has made some strong strides in just a few months.

As I wrote last year on The New Stack about the OpenTofu fork, the Linux Foundation made the right call to endorse it. Companies and open source projects had adopted Terraform as part of their infrastructure and contributed to its success under the idea that it was open source. The abrupt change to a non-OSI license, one that's poorly understood and intentionally vague, set organizations scrambling.
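For anyone wondering what "client-side state encryption" means in practice: the state file gets encrypted before it ever leaves your machine, so the storage backend only sees ciphertext. A minimal conceptual sketch in Python using the `cryptography` package; this illustrates the idea, not OpenTofu's actual design or configuration syntax:

```python
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_state(state: dict, key: bytes) -> bytes:
    """Encrypt a state document locally before uploading it anywhere."""
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat for the same key
    plaintext = json.dumps(state).encode()
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_state(blob: bytes, key: bytes) -> dict:
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(AESGCM(key).decrypt(nonce, ciphertext, None))

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_state({"resources": []}, key)
assert decrypt_state(blob, key) == {"resources": []}
```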

Comment Nepo babies (Score 1) 67

That, and whether you can keep your mouth shut, are probably the big deciding factors.

The company doesn't have any really amazing tech; they're just being allowed to run roughshod over all human privacy because they are tightly tied in with the American ruling class.

The thing about Palantir is they're a constant reminder that you have a ruling class, and that ruling class can do whatever they want to you, whenever they want.

But hey, how about those trans girls playing field hockey? And what about da woke? If that doesn't work for you, I've got violent video games and pornography. Take your pick of whatever moral panic you want to trade your entire economic future for.

Comment Re:Going for gold... (Score 2) 91

Focus group results are subject to two pretty obvious problems. One is that the kind of people who want to do them and have time to do them are not usually the people you actually want input from. Two is that the criteria for selecting focus group members can be chosen specifically to get a desired result: you read research that says certain types of people want certain things, and then you recruit people like that to give positive feedback for your shitty ideas.
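The second problem is easy to demonstrate with a toy simulation: recruit along a criterion that correlates with liking the idea, and the focus group's approval rate drifts far from the population's. All the numbers here are invented for illustration:

```python
import random

random.seed(0)
TRUE_APPROVAL = 0.30  # hypothetical: 30% of the real population likes the idea

population = [random.random() < TRUE_APPROVAL for _ in range(100_000)]

# Recruiting criterion correlated with liking the idea:
# fans are 4x as likely to "fit the profile" as everyone else.
def fits_profile(likes: bool) -> bool:
    return random.random() < (0.8 if likes else 0.2)

focus_group = [likes for likes in population if fits_profile(likes)][:200]
approval = sum(focus_group) / len(focus_group)
print(f"population approval: {TRUE_APPROVAL:.0%}, focus group approval: {approval:.0%}")
# expect roughly 63% in the focus group vs 30% in the population
```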
