Comment Re:A serious question (Score 1) 40

It's a good question, and one I'm working on getting an answer to: by giving AI hard, complex engineering problems and then having engineers look at the output to determine whether it is meaningful or just expensive gibberish.

By doing this, I'm trying to feel out the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it are ones people can solve in a similar length of time; I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.

If the really hard problems aren't solvable by AI at all (if it's all just gibberish), then you can never improve on that figure. It's as good as it is ever going to get.

I've open-sourced what the AIs have come up with so far, if you want to take a look, because that is what will tell you whether good can come out of AI or not.

Comment Re:Employee conversation in work environment (Score 1, Interesting) 40

The conversations are not private, but PII laws still apply. Anything in the messages that violates PII privacy laws is forbidden regardless of company policy; policy cannot overrule the law.

Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.

However, business these days is international, and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and is under the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what: that means any conversation that takes place physically on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.

This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.

Comment Eric Schmidt on AI used to make bioweapons soon (Score 1) 13

From the transcript, about 43 minutes into a public conversation with Eric Schmidt from Apr 10, 2025: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
====
          "Question: Thanks for the great conversation so far. Leonard Justin. I'm a PhD student at MIT. Um, I was wondering if you could just discuss a bit more some of the risks you see coming specifically with respect to biology and how we should go about mitigating those. What's the role of the AI developers? What's the role of government? Um, yeah, how can we move forward on that?
        ----
        Schmidt: So, so you're going to know a lot more about this area than I, but speaking as an amateur in your field, the two current risks from these models are cyber and biorisks.
        The cyber ones are easy to understand. The system can generate cyber attacks and in theory can generate zero-day cyber attacks that we can't see and it can unleash them and furthermore it can do it at scale.
        In biology, you get some evil, you know, the equivalent of Osama bin Laden. They would start with an open-source model. Now these open source models have been restricted using a testing process. Uh they're called cards and they test it out and they delete that information from the model.
        It turns out it's relatively easy to un to reverse essentially those security modes around the model and that's a danger. So now you've got a model that can generate bad pathogens.
        Then the second thing you have to do is you have to find things to build them. Our collective assessment at the moment is that that's a nation state risk, not an individual terrorist risk. Although we could be wrong, but there's plenty of examples uh and this the the report talks about some of the Chinese examples where in theory if they wanted to they could not only manufacture bad things but sorry design them but also manufacture them.
        The good news and the reason we're all alive today is that the bio stuff is hard to manufacture and distribute and to make deadly and and spread and so forth and so on. Um there's lots of evidence for example that you can take a bad bio right now and modify it just enough that the testing regimes and the sort of surveillance regimes it bypasses and that's another threat.
        So that's what I worry about.
        But I think at the moment u our consensus is we're right below the threshold where this is an issue and the consensus in in my side of the industry is that one more or two more turns of the crank these issues will be -- and you know by then you'll be graduated and you can sort of help solve these problems.
        Um the a crank is turned every 18 months or so. This is about three years.
        ----
        Moderator: But theoretically, couldn't AI and biotechnology help you come up with a counter measure?
        ----
        Schmidt: Um, I had thought so, and that was the argument I made until I I do a lot of national security work. And there's a term called offense dominant. And an offense dominant is a is a situation in a military context where the attack cannot be countered at the same level as the attack. In other words, the damage is done.
        And most people, most biologists who've worked in this believe that while the model can be trained to counter this, the damage from the offense part is far greater than the ability to defend it, which is why we're so worried about it."
====

Ultimately, I feel a big part of the response to that threat needs to be a shift in perspective, perhaps spread through people laughing at my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." :-)

Explored in more detail here:
"Recognizing irony is key to transcending militarism"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...
        "... Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
        These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
        There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ..."

Comment Not interesting yet. (Score 4, Informative) 49

It's possible that cetaceans have a true language. They certainly have something that seems to function like a "hello, I am (name)", where the name part differs from individual to individual but the surrounding clicks are identical. The response clicks also include that same phrase, which researchers think serves the purpose of a name.

But we've done structural analysis to death and, yes, all the results are interesting (the sounds seem to have high information content in the Shannon sense, some sort of structure, and some intriguing early-language features), but the Voynich Manuscript shows the same properties, and there's a 99.9% chance that the Voynich Manuscript is a fraud with absolutely no meaning whatsoever. Structure only tells you whether something is worth a closer look, and we have known for a long time that cetacean clicks were worth a closer look. Further structural work won't tell us anything we don't already know.
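
For anyone unfamiliar with the jargon, "high information content in the Shannon sense" just means the symbol stream isn't trivially predictable. Here is a minimal sketch of the kind of estimate involved; the click "alphabet" and sequence are made up for illustration, since real analyses work on coda or whistle units extracted from recordings:

import math
from collections import Counter

def shannon_entropy(symbols):
    # Estimate per-symbol Shannon entropy (in bits) from observed frequencies.
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical tokenised click sequence; a uniform 4-symbol source would give
# log2(4) = 2.0 bits/symbol, so lower values indicate more predictability/structure.
clicks = list("ABABCABABDABABC")
print(f"{shannon_entropy(clicks):.2f} bits per symbol")

A unigram estimate like this says nothing about syntax or meaning, which is exactly why structure alone can't distinguish a real language from a Voynich-style artefact.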

What we need is to have a long-term recording of activities and clicks/whistles, where the sounds are recorded from many different directions (because they can be highly directional) and where the recording positively identifies the source of each sound, what that source was doing at the time (plus what they'd been doing immediately prior and what they do next), along with what they're focused on and where the sounds were directed (if they were). This sort of analysis is where any new information can be found.

But we also need to look at lessons learned in primate research, linguistics, sociology, and anthropology to understand what ISN'T going to work in terms of approaches. In every one of those fields, we've learned that you learn best immersively, not from a distance. If an approach has failed in EVERY OTHER SOCIAL SCIENCE, then assuming it is going to work in cetacean research is stupid. It might turn out to be the correct way to go, but the assuming is the stupid part. If things fail repeatedly, regardless of where they are applied, then it's worth asking whether the stuff that keeps failing is defective.

Comment Re:Disinfo (Score 1) 114

Only idiots spy in person. They either pay for an insider or do all their monitoring remotely. When was the last time an actual foreign agent was caught on a base? Now look at the number of times they've used USB keys to smuggle in malware, used cash to pay off insiders, or used remote sensing technology like microphones that can read vibrations in windows, or other tracking devices.

I'm looking at where spies are caught. And they are never caught trying to be janitors on bases. If they're caught at all, then it's because the people they bribed to do all the inside work were themselves caught.

You have to go by the evidence and the evidence doesn't suggest infiltration.

Comment Suggest people back up the archive (Score 1) 41

Look, it's obvious that this will cause a pile of lawsuits so deep that it will become the new record holder for the world's tallest mountain.

I don't think anyone seriously doubts that.

However, if enough geeks and nerds each back up enough of the films, it could become another DeCSS John/Beowulf moment, where the status quo (who aren't currently in this collection) becomes untenable and a new dynamic is forced on the industry. It's blatantly obvious the industry intends to be stupid and naive and learn only through pain, misery, and suffering on all sides, but we can at least TRY to reduce the trauma as much as we can on our side of the equation.

Comment Who is sailing on a sinking ship? (Score 1) 160

First... We can't release this model because it doesn't work

Second... We need to convince the Christian right that they should use their influence to force this tech down everyone's throats.

Anthropic is going to go public, but this should be considered gross negligence, because they are knowingly asking for money for something they know can only decline.

Try the open models and tell me they aren't already good enough to replace Anthropic in 95% or more of cases. And how will Anthropic compete with free?

Why do open models matter? Well, it's only a matter of a few years before even minuscule devices will be able to host AI locally.

Here's the next thing. You need to see AI as an onion: neural networks are a series of layers. Last week, I was playing with running layers on hardware at different cost levels. I used a cluster of H200s for the outer layers, <$100 AI accelerators for the inner layers, and an RTX 3090 for the middle layers. I then tested coding and general nonsense like "what eyeshadow matches these earrings" questions. 85% of all questions were answered quickly on the $100 accelerator, and 99% were answered with the cheapest two options. And remember, I wasn't running a small model; I was running a gigantic model sharded across a $100 device, a $1,000 device, and a $500,000 device. I reduced usage of the $500,000 device to almost nothing and achieved the same results, at about a 20% performance drop, on a 1 trillion parameter model while increasing the compute density of a cluster of H200s by 100 fold.
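
To make the layer-sharding idea concrete, here is a minimal sketch of the bookkeeping involved; the tier boundaries, layer count, and device names are my own illustrative assumptions, not the actual configuration described above:

# Assign contiguous layer ranges of one model to hardware tiers of very
# different cost, then report which tier each layer runs on. Purely a toy
# plan; real sharding also has to move activations between the tiers.

TIERS = [
    ("sub_$100_accelerator", range(0, 48)),   # most layers on cheap hardware
    ("rtx3090",              range(48, 72)),  # a consumer GPU in the middle
    ("h200_cluster",         range(72, 80)),  # only a thin slice on datacenter GPUs
]

def device_for_layer(layer_idx):
    # Return the hardware tier that hosts a given layer index.
    for device, layers in TIERS:
        if layer_idx in layers:
            return device
    raise ValueError(f"layer {layer_idx} is not assigned to any tier")

def print_plan(num_layers=80):
    # Print the path a forward pass would take through the tiers.
    current = None
    for i in range(num_layers):
        device = device_for_layer(i)
        if device != current:
            print(f"layer {i} onwards: {device}")
            current = device

print_plan()

One way to read the 85%/99% figures above is as a cascading policy: try the cheap tier first and only escalate a request to the more expensive tiers when it's actually needed, which is what keeps the $500,000 device nearly idle.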

So what this means is that with extreme MoE models sharded properly, plus what is currently a $100 accelerator (and will soon be a $5 accelerator) and a thin layer in between (assume a single RTX 3090-class card per 1,000 users, or per 500 for better performance), the case for massive inference data centers is screwed. Give me a grant and a few months and I am 100% sure I can get efficiency closer to 10,000x better rather than 100x. And no, this is not an exaggeration. I would retrain the models to be spread across more, thinner layers with a LOT more experts. Of course, retraining something on the scale of a 1-trillion-parameter model is expensive. What's great is that there is true value in China footing the bill for this, because cutting their dependence on gigawatt data centers filled with NVidia and tons of HBM memory (possibly literally) is a survival requirement.
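
For context on why "a LOT more experts" changes the economics: in a mixture-of-experts layer, only the top-k experts fire for each token, so total parameter count can grow enormously while per-token compute stays small. A toy sketch of top-k gating follows; the sizes, the value of k, and the single-matrix "experts" are illustrative assumptions, not any real model's configuration:

import numpy as np

def topk_gate(token, gate_weights, experts, k=2):
    # Route one token through only its top-k experts, softmax-weighted.
    scores = gate_weights @ token                      # one gating score per expert
    top = np.argsort(scores)[-k:]                      # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                                       # softmax over just the chosen k
    return sum(wi * experts[i](token) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 64                                  # illustrative sizes
experts = [
    (lambda W: (lambda x: W @ x))(rng.normal(size=(d, d)))  # toy "expert": one matrix
    for _ in range(n_experts)
]
gate_weights = rng.normal(size=(n_experts, d))
out = topk_gate(rng.normal(size=d), gate_weights, experts, k=2)
print(out.shape)  # (16,) -- only 2 of the 64 experts did any work for this token

If each shard only ever has to run the handful of experts the gate picks, the per-device compute budget can be tiny even when the total parameter count is huge, which, as I read it, is the point of spreading the model thin.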

If there's anyone in China reading this: take Qwen or DeepSeek, spread them REALLY REALLY thin, then distribute the layers and open the weights. You'll make it so that companies like Huawei and the others can run layers locally on devices as small as an ESP32 and then distribute the layers outward. It was LM Studio's magical cross-platform sharding that got me going on this. It's so simple. It just works.

Comment It will happen (Score 1) 90

It doesn't matter if it's Google, Meta, or Apple; it will happen. And the government will LOVE IT, because it doesn't matter how good the techbro lawyers are: the government will gain access to the data. It would save many, many billions in surveillance and place the burden of law enforcement on the techbros, and they'll gladly pay that price for access to the personal data.

And if the heads-up display thing isn't good enough, expect everyone to start wearing cute hair clips, headphones, etc. that do the same thing. Glasses are nice for people like me. And the best part is, I would be the best of the assholes, because I generally wear my glasses facing the ceiling until I need to read.

It's coming, and it will be here soon. I believe that even now I could probably make a video-capturing hair clip with Android integration and all the fun stuff for maybe $25. And I'm sure Zuck can do it cheaper.

I think it will be funny when it becomes normalized for old men like me to wear hair clips.

Comment Wrong solution (Score -1, Troll) 54

The addictive nature of social media is a serious problem, but it is not the fault of social media companies. It is the fault of local and national governments, for failing to maintain services and failing to actually meet the costs of having a society. In the end the price gets paid, and it has been paid in mental health.

Enough is enough. The sheer incompetence of successive administrations is a disgrace and a dishonour to this nation. The government should pay the bill for having a functional society, not create a pit of despair and then blame corporations for society jumping in. This is nobody's responsibility beyond Number 10.

Comment Re:Ban everything (Score 5, Interesting) 54

Sometimes it is the right and appropriate thing to do, but I'd hardly call it a "first response". The Snowdrop Petition circulated after Dunblane, not after Hungerford. It took the repeated failure of government to actually do anything useful before society demanded a ban.

After the Traveller three-day festival in a farmer's field, the UK government tried to ban going places for a common purpose. A man claiming to be the reincarnation of King Arthur sued on the grounds that he couldn't join up with his knights if that were illegal. The UK courts determined that he was vastly more credible and overturned the ban.

In the 1930s, when the government restricted freedom of movement, the Kinder Scout mass trespass forced a right-to-roam act.

In short, we don't give a damn what the government wants, and never have. We know our rights and defend them, whether that means increased freedom or introducing bans. The rules are decided by the public; the government really has no say in the matter and never has had.

Comment Re:Failure to understand != proof of pet theories (Score 2) 114

There's a problem with that -- it fools those whose opinions are irrelevant, but masks the presence of those whose actions are extremely relevant.

There is absolutely nothing easier than hiding in a group of nutters. With surveillance for the last 50 or so years being mostly remote and passive, that's all they need to do. As long as the signal-to-noise ratio is poor for those trying to maintain secrecy but exceptionally good for those trying to steal the secrets, such efforts are counterproductive.

The F-117 and B-2 were so well known to just about everyone that model kits of them were being sold in stores for 20-25 years before Congress was officially told they existed. Why? Because the only thing the lies achieved was a total inability to detect that detailed plans were circulating among the public. By the time any acknowledgement existed, the source of the leaks was so well hidden by time that we will never discover how Airfix and other modelling companies were able to get the blueprints.

A glorious achievement of lies this was not. No, if you wanted to hide the program, the USG needed to make it boring. The more boring and mundane the better. Make it such an utter snoozefest that the spies and nerds would stand out like a sore thumb, not be totally drowned out by the crowd.
