Look on the bright side.
With the ESA supplying the spacecraft, most of the software is likely to be competently written and/or open-source. This will prove to the Martians that there is indeed intelligent life on Earth.
It's a good question, and one I'm working on answering: by giving AI hard, complex engineering problems and then having engineers examine the output to determine whether it is meaningful or just expensive gibberish.
By doing this, I'm trying to feel around the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it can generally be solved by people in a similar length of time. I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.
If the really hard problems aren't solvable by AI at all (it's all just gibberish) then you can never improve on that figure. It's as good as it is going to get.
I've open-sourced what the AIs have come up with so far, if you want to take a look; that is what will tell you whether good can come out of AI or not.
I would think it goes further. If the company has already failed (i.e., no longer exists), then the action is taken not by the company but by a former employee of that company (even if said former employee was the CEO). Former employees are not granted special authority over PII or over company-owned information.
Irrelevant. PII protections are not subject to company discretion.
The conversations are not private, but PII laws nonetheless still apply. Anything in the messages that violates PII privacy laws is forbidden regardless of company policy. Policy cannot overrule the law.
Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.
However, business these days is international, and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and is under the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what: that means any conversation that takes place physically on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.
This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.
From the transcript, about 43 minutes into a public conversation with Eric Schmidt from Apr 10, 2025: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
====
"Question: Thanks for the great conversation so far. Leonard Justin. I'm a PhD student at MIT. Um, I was wondering if you could just discuss a bit more some of the risks you see coming specifically with respect to biology and how we should go about mitigating those. What's the role of the AI developers? What's the role of government? Um, yeah, how can we move forward on that?
----
Schmidt: So, so you're going to know a lot more about this area than I, but speaking as an amateur in your field, the two current risks from these models are cyber and biorisks.
The cyber ones are easy to understand. The system can generate cyber attacks and in theory can generate zero-day cyber attacks that we can't see and it can unleash them and furthermore it can do it at scale.
In biology, you get some evil, you know, the equivalent of Osama bin Laden. They would start with an open-source model. Now, these open-source models have been restricted using a testing process (they're called cards), and they test it out and they delete that information from the model.
It turns out it's relatively easy to reverse, essentially, those security modes around the model, and that's a danger. So now you've got a model that can generate bad pathogens.
Then the second thing you have to do is you have to find things to build them. Our collective assessment at the moment is that that's a nation-state risk, not an individual terrorist risk. Although we could be wrong, there's plenty of examples, and the report talks about some of the Chinese examples where, in theory, if they wanted to, they could not only design bad things but also manufacture them.
The good news, and the reason we're all alive today, is that the bio stuff is hard to manufacture, distribute, make deadly, and spread. There's lots of evidence, for example, that you can take a bad bio right now and modify it just enough that it bypasses the testing regimes and the surveillance regimes, and that's another threat.
So that's what I worry about.
But I think at the moment our consensus is we're right below the threshold where this is an issue, and the consensus on my side of the industry is that in one or two more turns of the crank these issues will be -- and you know, by then you'll have graduated and you can sort of help solve these problems.
Um, a crank is turned every 18 months or so. This is about three years.
----
Moderator: But theoretically, couldn't AI and biotechnology help you come up with a counter measure?
----
Schmidt: Um, I had thought so, and that was the argument I made until -- I do a lot of national security work. There's a term called "offense dominant." An offense-dominant situation, in a military context, is one where the attack cannot be countered at the same level as the attack. In other words, the damage is done.
And most people, most biologists who've worked in this believe that while the model can be trained to counter this, the damage from the offense part is far greater than the ability to defend it, which is why we're so worried about it."
====
Ultimately, I feel a big part of the response to that threat needs to be a shift in perspective, like the one behind my sig (which people laugh at): "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
Explored in more detail here:
"Recognizing irony is key to transcending militarism"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...
"... Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arcologies and agricultural abundance for everyone everywhere?
These militaristic socio-economic ironies would be hilarious if they were not so deadly serious.
There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all.
It's possible that cetaceans have a true language. They certainly have something that seems to function like a "hello, I am (name)", where the name part differs between individuals but the surrounding clicks are identical. The response clicks include that same phrase, which researchers think serves the purpose of a name.
But we've done structural analysis to death and, yes, all the results are interesting (it seems to have high information content, in the Shannon sense, seems to have some sort of structure, and seems to have intriguing early-language features), but so does the Voynich Manuscript and there's a 99.9% chance that the Voynich Manuscript is a fraud with absolutely no meaning whatsoever. Structure only tells you if something is worth a closer look and we have known for a long time that cetacean clicks were worth a closer look. Further structural work won't tell us anything we don't already know.
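The Voynich caveat is easy to make concrete: first-order Shannon entropy measures only symbol-frequency structure, so a meaningless shuffle of a text scores exactly as "information-rich" as the original. A minimal Python sketch (the sample strings are illustrative, not from any cetacean dataset):

```python
# Structure alone doesn't prove meaning: shuffling a text destroys the
# meaning while leaving the symbol distribution (and hence the first-order
# entropy) the same -- the Voynich problem in miniature.
import random
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Per-symbol Shannon entropy (in bits) of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

english = "the quick brown fox jumps over the lazy dog " * 10
gibberish = "".join(random.sample(english, len(english)))  # same letters, no meaning

# Identical entropy, wildly different meaningfulness.
print(abs(shannon_entropy(english) - shannon_entropy(gibberish)) < 1e-9)  # prints True
```

Which is why higher-order analyses (conditional entropy, Zipf-like distributions) are needed even to get to "worth a closer look", and why none of them can get past it.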
What we need is to have a long-term recording of activities and clicks/whistles, where the sounds are recorded from many different directions (because they can be highly directional) and where the recording positively identifies the source of each sound, what that source was doing at the time (plus what they'd been doing immediately prior and what they do next), along with what they're focused on and where the sounds were directed (if they were). This sort of analysis is where any new information can be found.
But we also need to look at lessons learned in primate research, linguistics, sociology, and anthropology to understand what ISN'T going to work, in terms of approaches. In every one of those fields, we've learned that you learn best immersively, not from a distance. If an approach has failed in EVERY OTHER SOCIAL SCIENCE, then assuming it is going to work in cetacean research is stupid. It might turn out to be the correct way to go, but the assumption is the stupid bit. If things fail repeatedly, regardless of where they are applied, then there's a decent chance the stuff that keeps failing is defective.
Only idiots spy in person. They either pay for an insider or do all monitoring remotely. When was the last time an actual foreign agent was caught in a base? Now look at the number of times they've used USB keys to import malware, used cash to pay off insiders, or used remote sensing technology like microphones capable of analysing vibrations in windows, or other tracking devices.
I'm looking at where spies are caught. And they are never caught trying to be janitors on bases. If they're caught at all, then it's because the people they bribed to do all the inside work were themselves caught.
You have to go by the evidence and the evidence doesn't suggest infiltration.
Regulation in the modern US is not about a level playing field, it is about corrupt businesses and corrupt officials using regulations as an excuse to persecute competitors.
Look, it's obvious that this will cause an absolute flurry of lawsuits so deep that it will become the new record holder for the world's tallest mountain.
I don't think anyone seriously doubts that.
However, if enough geeks and nerds each back up enough of the films, it could become another DeCSS John/Beowulf moment, where the position of the status quo (who aren't currently in this collection) becomes untenable and a new dynamic is forced on the industry. It's blatantly obvious the industry intends to be stupid and naive, and to learn only through pain, misery, and suffering on all sides, but we can at least TRY to reduce the trauma as much as we can on our side of the equation.
The addictive nature of social media is a serious problem, but it is not the fault of the social media companies. It is the fault of local and national governments, through their failure to maintain services and to actually meet the costs of having a society. In the end the price gets paid one way or another, and here it has been paid in mental health.
Enough is enough. The sheer incompetence of successive administrations is a disgrace and a dishonour to this nation. The government should pay the bill for having a functional society, not create a pit of despair and then blame corporations for society jumping in. This is nobody's responsibility beyond Number 10.
Sometimes it is the right and appropriate thing to do, but I'd hardly call it a "first response". The Snowdrop Petition circulated after Dunblane, but not after Hungerford. It took the repeated failure of government to do anything useful to make society demand a ban.
After the Traveller three-day festival in a farmer's field, the UK government tried to ban going places for a common purpose. A man claiming to be the reincarnation of King Arthur sued on the grounds that he couldn't join up with his knights if that was illegal. The UK courts determined that he was vastly more credible and overturned the ban.
In 1932, when access to open country was restricted, the Kinder Mass Trespass eventually forced a right-to-roam act.
In short, we don't give a damn what the government wants, and never have. We know our rights and defend them, whether that means increased freedom or introducing bans. The rules are decided by the public; the government has no real say in the matter and never has had.
There's a problem with that -- it fools those whose opinions are irrelevant, but masks the presence of those whose actions are extremely relevant.
There is absolutely nothing easier than hiding in a group of nutters. With surveillance for the last 50 or so years being mostly remote and passive, that's all they need to do. As long as the signal-to-noise ratio is poor for those trying to maintain secrecy, but exceptionally good for those trying to steal those secrets, then such efforts are counterproductive.
The F-117 and B-2 were so well-known to just about everyone that model kits of them were being sold in stores for 20-25 years before Congress was officially told they existed. Why? Because the only thing the lies achieved was a total inability to detect that detailed plans were circulating amongst the public. By the time acknowledgement came, the source of the leaks was so well-hidden by time that we will never discover how Airfix and the other modelling companies got the blueprints.
A glorious achievement of lies this was not. No, to hide the program, the USG needed to make it boring. The more boring and mundane the better. Make it such an utter snoozefest that the spies and nerds would stand out like a sore thumb rather than be totally drowned out by the crowd.
With all the fancy scientists in the world, why can't they just once build a nuclear balm?