Comment Re:We found no evidence that this caused any probl (Score 1) 15
Yeah, that's a change from the usual joke, and not in a good way: "We didn't investigate ourselves and found nothing wrong."
The traveling salesman needs to leave home first. His quota included too many cities, so he's still waiting for a solution.
I wish AI would take away the jobs of Slashdot editors. AI would be less likely to post duplicate Slashvertisements.
The original comment made a false claim. You reject the explanation of why it is false by asserting that the actual problem is a different, non-falsifiable, behavior. That's not good-faith behavior.
National Security Letters are not magic fascism tools. They're subpoenas, so they can compel disclosure of "non-content" information -- but not order Nvidia to take actions like adding things to its products.
And if Nvidia refuses, the government cannot "legally just take them over now". They would need to take Nvidia to court, and those proceedings would bring a huge spotlight to the government's requests.
I didn't say anything about whether I believe that, but thanks for speculating?
A company I work with thought they could rely on that kind of clause (which is available in OpenAI's commercial-use contracts) in a different context, until security and contracts people disabused them of the notion.
What are you talking about, Willis?
The federal court case where the NYT is suing OpenAI supports something vaguely like you are claiming -- that OpenAI could be ordered to capture and turn over user data to the government, subject to it being approved and restricted in advance by a court. But this particular story is about federal agencies being able to use the same chatbot interface that any of us can use, except for cheap. This doesn't give them any special access to anyone else's data.
I'm not using any models for real coding, thanks to job responsibilities that keep me too busy doing other things.
But as a consolation prize, I will note that the output "eval rate" usually drops as the number of generated tokens rises, which makes intuitive sense. For the 120b size and 3960X, between 5.5 and 6.75 token/s; for 120b and M2 Max, between 20.16 and 23.83 token/s; for 20b and 3960X, between 9.42 and 10.74 token/s; for 20b and M2 Max, between 32.86 and 36.44 token/s. Additionally, the 120b model gave substantially longer answers than the 20b model: averaged across 16 runs in each configuration, 8223 vs 4311 tokens.
a) um.
The slang dog walk is "to overpower" or "outsmart" someone, as if in utter control of them, as when walking a dog. The slang verb own is a close synonym.
Bezos refusing to let the Washington Post endorse anyone is not anyone "getting dog walked". And the insane ranter's original comment misused the term, not me.
b) The paper could still criticize Trump, and they did. They just couldn't officially endorse Kamala Harris -- or Trump or anyone else.
c) You are the one who was nit-picking, even though you were picking imaginary nits. Rich people owning newspapers is not news. rsilvergun has a long history of making things up and never apologizing or retracting when people call him out.
d) I have no idea what you are talking about, but it sounds off-topic.
e) I don't have any other Slashdot accounts, so no. Why are you posting as AC to defend a serial fabricator's misuse of slang?
Editors quitting because their boss wouldn't let the paper pick a side is fundamentally different from "journalists who tried to criticize Donald Trump get dog walked".
For ollama run gpt-oss:$SIZE --verbose --think true --hidethinking `cat prompt.txt` on an M2 Max MacBook Pro (96 GB RAM) versus a Threadripper 3960X (128 GB RAM, GeForce RTX 2080 Super):
120b model, M2 Max: 67.43 token/s prompt eval (221 tokens), 21.29 token/s output eval (8694 tokens)
20b model, M2 Max: 164.65 token/s prompt eval, 35.50 token/s output eval (4180 tokens)
120b model, 3960X: 18.08 token/s prompt eval, 6.00 token/s output eval (8916 tokens)
20b model, 3960X: 30.30 token/s prompt eval, 10.67 token/s output eval (4946 tokens)
This is a coding-oriented prompt, asking it to create a Go+Vue.js web app with only high-level direction about the structure of the app. Token rates should scale pretty linearly with memory bandwidth, so a big GPU should be faster than my Mac. Ollama logs showed a few of the layers in the 20b model getting run on the 3960X's GPU, but mostly the 3960X was running things on the CPU.
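The per-configuration rates above come from ollama's --verbose timing summary. As a minimal sketch (assuming the summary format shown below, which is what ollama prints after a run; the sample numbers are from the 120b/M2 Max line), pulling the rates out for averaging across runs might look like:

```python
import re

# Sample --verbose summary lines as ollama prints them after a run
# (timings here are from one 120b run on the M2 Max; they vary per run).
log = """\
prompt eval count:    221 token(s)
prompt eval rate:     67.43 tokens/s
eval count:           8694 token(s)
eval rate:            21.29 tokens/s
"""

def parse_rates(text):
    """Extract the prompt-eval and output-eval token rates from a summary."""
    rates = {}
    for label, value in re.findall(
        r"(prompt eval rate|eval rate):\s+([\d.]+) tokens/s", text
    ):
        rates[label] = float(value)
    return rates

print(parse_rates(log))
```

Collecting this per run makes it easy to compute the min/max and average rates quoted above instead of eyeballing 16 runs by hand.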
Voice recognition. Photo touch-ups. Speech synthesis, for vehicle driver assistance. Text processing that you don't want to delegate to The Cloud (a.k.a. Someone Else's Computer), at least for limited capabilities of it.
I got a Switch 2 while visiting Australia in mid-June, where stock was fairly available -- I was able to walk into Canberra Centre mall and get one with no reservation or wait. It's still out of stock near me in the US. I would guess that's an effect of tariffs.
And I don't understand the analogy with trading card games. My kids used to be into Magic, but those sets were always readily available at list price. Specific rare cards are expensive, but there's nothing analogous to that in video game sales -- paid loot boxes and gacha games seem out of vogue.
Yes, yes, the AC was not completely specific about the question.
When does private party A doing something illegal mean the government can fairly compel private party B to take some non-emergency action?
In an office environment, particularly one with a dress code, coveralls are an invisibility cloak that grants access to any wiring closet.
People say that a lot, but where I work, utility closets are locked. People need to (depending on the closet) know a combination, have the right badge, or have a physical key. And company policy is to escort people without badges to security.
The bigger the theory the better.