Comment Re:NVidia f*ed up here (Score 3, Insightful) 77

National Security Letters are not magic fascism tools. They're subpoenas, so they can compel disclosure of "non-content" information -- but not order Nvidia to take actions like adding things to its products.

And if Nvidia refuses, the government cannot "legally just take them over now". They would need to take Nvidia to court, and those proceedings would bring a huge spotlight to the government's requests.

Comment Re:Typing anything into a ChatGPT window... (Score 1) 25

I didn't say anything about whether I believe that, but thanks for speculating?

A company I work with thought they could rely on that kind of clause (which is available in OpenAI's commercial-use contracts) in a different context, until security and contracts people disabused them of the notion.

Comment Re:Typing anything into a ChatGPT window... (Score 1) 25

What are you talking about, Willis?

The federal court case where the NYT is suing OpenAI supports something vaguely like what you're claiming -- that OpenAI could be ordered to capture and turn over user data to the government, subject to advance approval and limits set by a court. But this particular story is about federal agencies getting to use the same chatbot interface any of us can use, just for cheap. It doesn't give them any special access to anyone else's data.

Comment Re:So either an nVIDIA A100 or a maxed M2 Mac Pro (Score 1) 29

I'm not using any models for real coding, thanks to job responsibilities that keep me too busy doing other things.

But as a consolation prize, I will note that the output "eval rate" usually drops as the number of generated tokens rises, which makes intuitive sense. The ranges I saw:

120b model, 3960X: 5.5 to 6.75 token/s
120b model, M2 Max: 20.16 to 23.83 token/s
20b model, 3960X: 9.42 to 10.74 token/s
20b model, M2 Max: 32.86 to 36.44 token/s

Additionally, the 120b model gave substantially longer answers than the 20b model: averaged across 16 runs in each configuration, 8223 vs 4311 tokens.
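For anyone who wants to tally their own runs the same way, here's a minimal sketch of how those averages can be computed. It assumes each run's --verbose stats block was captured to its own file under a logs/ directory (that directory name and file layout are placeholders of mine, not anything ollama creates for you):

    # Average the "eval rate" and "eval count" lines from saved
    # `ollama run ... --verbose` stats blocks, one run per .log file.
    import glob
    import re
    from statistics import mean

    rates, counts = [], []
    for path in glob.glob("logs/*.log"):  # placeholder location for saved runs
        text = open(path).read()
        # ollama prints e.g. "eval count:  8223 token(s)" and
        # "eval rate:   21.29 tokens/s"; "prompt eval ..." lines won't match.
        rate = re.search(r"^\s*eval rate:\s+([\d.]+)", text, re.MULTILINE)
        count = re.search(r"^\s*eval count:\s+(\d+)", text, re.MULTILINE)
        if rate and count:
            rates.append(float(rate.group(1)))
            counts.append(int(count.group(1)))

    if rates:
        print(f"runs: {len(rates)}")
        print(f"mean eval rate:  {mean(rates):.2f} token/s")
        print(f"mean eval count: {mean(counts):.0f} tokens")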

Comment Re:So if you do actual journalism these days (Score 1) 123

a) um.

The slang "dog walk" means "to overpower" or "outsmart" someone, as if in utter control of them, as when walking a dog. The slang verb "own" is a close synonym.

Bezos refusing to let the Washington Post endorse anyone is not an example of anyone "getting dog walked". And it was the insane ranter's original comment that misused the term, not me.

b) The paper could still criticize Trump, and they did. They just couldn't officially endorse Kamala Harris -- or Trump or anyone else.

c) You are the one who was nit-picking, even though the nits were imaginary. Rich people owning newspapers is not news. rsilvergun has a long history of making things up and never apologizing or retracting when people call him out.

d) I have no idea what you're talking about, but it sounds off-topic.

e) I don't have any other Slashdot accounts, so no. Why are you posting as AC to defend a serial fabricator's misuse of slang?

Comment Re:So either an nVIDIA A100 or a maxed M2 Mac Pro (Score 2) 29

For ollama run gpt-oss:$SIZE --verbose --think true --hidethinking `cat prompt.txt` on an M2 Max MacBook Pro (96 GB RAM) versus a Threadripper 3960X (128 GB RAM, GeForce RTX 2080 Super):

120b model, M2 Max: 67.43 token/s prompt eval (221 tokens), 21.29 token/s output eval (8694 tokens)
20b model, M2 Max: 164.65 token/s prompt eval, 35.50 token/s output eval (4180 tokens)
120b model, 3960X: 18.08 token/s prompt eval, 6.00 token/s output eval (8916 tokens)
20b model, 3960X: 30.30 token/s prompt eval, 10.67 token/s output eval (4946 tokens)

This is a coding-oriented prompt, asking it to create a Go+Vue.js web app with only high-level direction about the structure of the app. Token rates should scale pretty linearly with memory bandwidth, so a big GPU should be faster than my Mac. Ollama logs showed a few of the layers in the 20b model getting run on the 3960X's GPU, but mostly the 3960X was running things on the CPU.
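As a rough check on the memory-bandwidth point, here's a back-of-the-envelope comparison. The bandwidth figures are nominal spec-sheet assumptions on my part (Apple quotes about 400 GB/s for the M2 Max; quad-channel DDR4-3200 on the 3960X works out to roughly 102 GB/s), not anything I measured:

    # If decode speed is memory-bandwidth bound, the speedup should
    # roughly track the bandwidth ratio. Nominal figures, not measured.
    m2_max_bw = 400.0       # GB/s, Apple's stated M2 Max bandwidth
    tr3960x_bw = 4 * 25.6   # GB/s, quad-channel DDR4-3200 (102.4 GB/s)

    predicted = m2_max_bw / tr3960x_bw
    observed_120b = 21.29 / 6.00    # output eval rates from the runs above
    observed_20b = 35.50 / 10.67

    print(f"bandwidth ratio: {predicted:.1f}x")       # ~3.9x
    print(f"observed, 120b:  {observed_120b:.1f}x")   # ~3.5x
    print(f"observed, 20b:   {observed_20b:.1f}x")    # ~3.3x

Close enough to bandwidth-bound for me; the 20b gap being a bit smaller is consistent with a few of its layers landing on the 2080 Super.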

Comment Re:I can tell you they keep up with demand just fi (Score 2) 24

I got a Switch 2 while visiting Australia in mid-June, where stock was easy to find -- I walked into the Canberra Centre mall and got one with no reservation or wait. It's still out of stock near me in the US; I'd guess that's an effect of tariffs.

And I don't understand the analogy with trading card games. My kids used to be into Magic, but those sets were always readily available at list price. Specific rare cards are expensive, but there's nothing analogous to that in video game sales -- paid loot boxes and gacha games seem out of vogue.

Comment Re:How did they plant (Score 2) 54

"In an office environment, particularly one with a dress code, coveralls are an invisibility cloak that grants access to any wiring closet."

People say that a lot, but where I work, utility closets are locked. People need to (depending on the closet) know a combination, have the right badge, or have a physical key. And company policy is to escort people without badges to security.
