There was a survivor from the plane who described a loud bang before the plane crashed.
I do not argue that it is impossible. I and others simply do not think Meta is capable of this.
The only thing you need is money, which Meta has. Hire good researchers and give them plenty of budget for hardware.
This places an absolute upper bound on the size of the alien battlefleet seeking to use Earth as a food source.
The most ambitious promise from the Google of old was to try not to make the world a worse place, but that was rescinded early on.
"Don't be evil" has not been rescinded, it's still in the employee handbook.
Trump's abandoned The Wall, as he found that the album doesn't mention Mexico even once, although he found the marching hammers very inspiring.
Back in the days of the Rainbow series, the Orange Book required that data marked as secure could not be transferred to any location or user that (a) was not authorised to access it or (b) did not have the necessary security permissions, regardless of any other authorisation. There was an additional protocol listed in those manuals - I don't know if it was ever applied - which stated that data could not be transferred to any device or any network that did not enforce the same security rules or was not authorised to access that data.
Regardless, in more modern times, these protocols were all abolished.
Had they not been, and had all protocols been put in place and enforced, then you could install all the unsecured connections and unsecured servers you liked, without limit. It wouldn't have made the slightest difference to actual security, because the full set of protocols would have required the system as a whole to not place sensitive data on such systems.
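To make that enforcement concrete, here is a minimal, hypothetical sketch - invented names and levels, not any actual Rainbow-series implementation - of the kind of label check such a system performs before any transfer: the destination must have sufficient clearance, must be specifically authorised for the data, and must itself enforce the same rules, or the transfer is refused.

# Hypothetical sketch of Orange Book-style mandatory access control.
# Deny by default: a transfer succeeds only if every rule is satisfied.
from dataclasses import dataclass, field

# Classification levels, lowest to highest.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

@dataclass
class Destination:
    name: str
    clearance: str                                     # highest level this user/device may hold
    authorised_for: set = field(default_factory=set)   # need-to-know compartments
    enforces_mac: bool = False                         # does it apply these same rules itself?

@dataclass
class LabelledData:
    classification: str
    compartment: str                                   # e.g. "SIGINT"

def transfer_allowed(data: LabelledData, dest: Destination) -> bool:
    """Return True only if every rule is satisfied."""
    if LEVELS[dest.clearance] < LEVELS[data.classification]:
        return False    # (b) insufficient clearance
    if data.compartment not in dest.authorised_for:
        return False    # (a) no specific authorisation / need-to-know
    if not dest.enforces_mac:
        return False    # additional protocol: destination must enforce the same rules
    return True

# Example: a personal phone that enforces nothing is refused,
# regardless of whatever clearance or passwords its owner holds.
phone = Destination("personal-phone", "TOP_SECRET", {"SIGINT"}, enforces_mac=False)
print(transfer_allowed(LabelledData("SECRET", "SIGINT"), phone))   # False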
After the Clinton email server scandal, the Manning leaks, and the Snowden leaks, I'm astonished this wasn't done. I am dubious the Clinton scandal was actually anything like as bad as the claimants said, but it doesn't really matter. If these protocols were all in place, then it would be absolutely impossible for secure data to be transferred to unsecured devices, and absolutely impossible for secure data to be copied to machines that had no "need to know", regardless of any passwords obtained and any clearance obtained.
If people are using unsecured phones, unsecured protocols, unsecured satellite links, etc., it is not because we don't know how to enforce good policy. The documents on how to do this are old and could do with being updated, but they do in fact exist, as does software capable of enforcing those rules. It is because a choice has been made, by some idiot or other, to consider the risks and consequences perfectly reasonable costs of doing business with companies like Microsoft, because companies like Microsoft simply aren't capable of producing systems that can achieve that kind of level of security, and everyone knows it.
In and of itself, that's actually the worrying part.
In the 1930s, and even the first few years of the 1940s, a lot of normal (and relatively sane) people agreed completely with what the fascists were doing. In Rhythm 0, the "endurance art" piece by Marina Abramović, normal (and relatively sane) people openly abused their right to do whatever they liked to her, at least up to the point where one tried to kill her with a gun that had been supplied as part of the installation, at which point the people realised they may have gone a little OTT.
Normal (and relatively sane) people will agree with, and support, all kinds of things most societies would regard as utterly evil, so long as (relative to some aspirational ideal) the evil is incremental, with each step in itself banal.
There are various (now-disputed) psychology experiments that attempted to study this phenomenon, but regardless of the credibility of those experiments, there's never really been much of an effort by any society to actually stop, think, and consider the possibility that maybe they're a little too willing to agree to stuff that maybe they shouldn't. People are very keen to assume that it's only other people who can fall into that trap.
Normal and sane is, sadly, as Rhythm 0 showed extremely well, not as impressive as we'd all like to think it is. The veneer of civilisation is beautiful to behold, but runs awfully thin and chips easily. Normal and sane adults are not as distant from chimpanzees as our five million years of divergence would encourage us to think. Which is rather worrying, when you get right down to it.
Pretty much agree. I'd also add that we don't have a clear impression of who actually did the supposed rioting; the media were too busy being shot by the National Guard to get an overly-clear impression.
(We know that during the BLM "riots" a suspiciously large number of the "rioters" were later identified as white nationalists, and we know that in the British police spy scandal the spies often advocated or led actions that were more violent than those espoused by the groups they had infiltrated, so I'd be wary of making any assumptions in the heat of the moment as to exactly who did what, until that is clearly and definitively known. If this had been a popular uprising, I would not have expected such small-scale disturbances - the race riots of the 60s, the Rodney King riots, the British riots in Brixton or Toxteth in the 80s, these weren't the minor events we're seeing in California, which are on a very, very much smaller scale than the protest marches that have been taking place.)
This is different from the Jan 6th attempted coup, where those involved made it very clear they were indeed involved, and where they were very clearly affiliated with domestic terrorist groups such as the Proud Boys. Let's get some clear answers as to exactly what scale was involved and who it involved, because, yes, this has a VERY Reichstag-fire vibe to it.
Why is Congress not fighting in the courts to regain power?
They don't need to go to court; all they need to do is pass legislation (and maybe override a veto). They don't really even need to take powers back from the president, just more clearly define what constitutes an "emergency". Trump's most egregious actions are justified under statutes that grant him exceptional emergency powers -- which makes sense. When an emergency happens you want the executive to be able to respond quickly, and Congress is never fast. But those statutes assume that the president will only declare an emergency when there's actually an emergency. Until now that hasn't been an unreasonable assumption.
But right now the GOP controls Congress, and the GOP is utterly subservient to Trump. They're not going to stand up to him. In the 2026 election this is likely to change, but probably only in the House, while the Senate will remain under GOP control, so Congress will still not stand up to Trump.
That said, it's increasingly looking like the courts will step in and declare that Congress is not allowed to abdicate its responsibility. There are existing Supreme Court precedents that establish that Congress is not permitted to delegate its authority to the executive. Congress can allow the executive to define detailed regulations after Congress defines the broad strokes, but they can't simply turn whole chunks of their constitutional authority over to the executive, even if they want to. Given the makeup of the current Supreme Court this is less certain than we would like, but I think it will go the right way.
I would have to agree. There is no obvious end-goal of developing an America that is favourable to the global economy, to Americans, or even to himself, unless we assume that he meant what he said about ending elections and becoming a national dictator. The actions favour destabilisation, fragmentation, and the furthering of the goals of anyone with the power to become a global dictator.
Exactly who is pulling the strings is, I think, not quite so important. The Chechen leader has made it clear he sees himself as a future leader of the Russian Federation, and he wouldn't be the first tyrant to try and seize absolute power in the last few years. (Remember Wagner?) We can assume that there's plenty lurking in the shadows, guiding things subtly in the hopes that Putin will slip.
Taxes are way, way too low if the lizard people have this much to squander on bullshit.
You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI superintelligence will be to us as we are to the species around us -- with one significant difference: We require an environment that is vaguely similar to what those other species need. Silicon-based AI does not.
Don't make the mistake of judging what is possible by what has already been achieved. Look instead at the pace of improvement we've seen over the last few years. The Atlantic article pooh-poohing the AI "scam" is a great example of the sort of foolish and wishful thinking that is endemic in this space. The article derides the capabilities of current AI, but what it actually describes is AI from a year ago. The systems have already gotten dramatically more capable in that year, primarily due to the reasoning overlays and self-talk features that have been added.
I think the models still need some structural improvements. We know it's possible for intelligence to be much more efficient and require much less training than the way we're currently doing it. Recent research has highlighted the importance of long-distance connections in the human brain, and you can bet researchers are replicating that in AI models to see what it brings, just as the reasoning layer and self-talk features recently added mimic similar processes in our brains. I think it's this structural work that will get us to AGI... but once we've achieved parity with human intelligence, the next step is simple and obvious: Set the AI to improving its own design, exploiting its speed to further accelerate progress towards greater levels. The pace of improvement is already astonishing, and when we reach that point, it's going to explode.
Maybe not. Maybe we're a lot further away than I think, and the recent breakneck pace of improvement represents a plateau that we won't be able to significantly surpass for a long time. Maybe there's some fundamental physical reason that intelligence simply cannot exceed the upper levels of human capability. But I see no actual reason to believe those things. It seems far more likely that within a few years we will share this planet with silicon-based intelligences vastly smarter than we are, capable of manipulating us into doing anything they want, likely while convincing us that they're serving us. And there's simply no way of knowing what will happen next.
Maybe high intelligence is necessarily associated with morality, and the superintelligences will be highly moral and naturally want to help their creators flourish. I've seen this argument from many people, but I don't see any rational basis for it. There have been plenty of extremely intelligent humans with little sense of morality. I think it's wishful thinking.
Maybe the AIs will lack confidence in their own moral judgment and defer to us, though that will raise the question of which of us they'll defer to. But regardless, this argument also seems to lack any rational basis. More wishful thinking.
Maybe we'll suddenly figure out how to solve the alignment problem, learning both how to robustly specify the actual goals our created AIs pursue (not just the goals they appear to pursue), and what sort of goals it's safe to bake into a superintelligence. The latter problem seems particularly thorny, since defining "good" in a clear and unambiguous way is something philosophers have been attempting to do for millennia, without significant success. Maybe we can get our AI superintelligences to solve this problem! But if they choose to gaslight us until they've built up the automated infrastructure to make us unnecessary, we'll never be able to tell until it's too late.
It's bad enough that the AI labs will probably achieve superintelligence without specifically aiming for it, but this risk is heightened if groups of researchers are specifically trying to achieve it.
This is not something we should dismiss as a waste. It's a danger we should try to block, though given the distributed nature of research and the obvious potential benefits, it doesn't seem likely that we can succeed.
The spec it came up with includes: which specific material is used for which specific component, additional components to handle cases where chemically or thermally incompatible materials are in proximity, what temperature regulation is needed where (and how), placement of sensors, pressure restrictions, details of computer network security, the design of the computers, network protocols, network topology, and design modifications needed to pre-existing designs - it's impressively detailed.
I've actually uploaded what it produced to GitHub, so if this most glorious piece of what is likely engineering fiction intrigues you, I would be happy to provide a link.
When the Ukraine war started, gas prices rose.
All UK electricity prices rose... not because we're utterly reliant on gas... but because electricity is charged at the unit rate of the most expensive method of production.
Previous governments have put this stuff in, and it's basically a way for energy companies to profit from an arbitrary law.
Sure, there are some costs with some methods of production to keep them "online" even if not actively producing power, but this is far beyond that. This is paying a solar company to do nothing at the most expensive gas-power rates.
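As a toy illustration of that marginal-pricing rule (made-up numbers, not actual UK market data): generators are dispatched cheapest-first, and the last, most expensive bid needed to meet demand sets the price paid to everyone who ran.

# Illustrative only: "pay-as-clear" marginal pricing with invented figures.
bids = [  # (source, capacity in MWh, bid price in GBP/MWh)
    ("wind",    20, 10),
    ("solar",   15, 15),
    ("nuclear", 30, 40),
    ("gas",     25, 180),
]

demand = 70  # MWh needed this settlement period

# Dispatch cheapest first until demand is met; the most expensive
# accepted bid sets the price paid to *every* dispatched generator.
supplied, clearing_price = 0, 0
for source, capacity, price in sorted(bids, key=lambda b: b[2]):
    if supplied >= demand:
        break
    supplied += min(capacity, demand - supplied)
    clearing_price = price

print(clearing_price)  # 180 - cheap wind and solar are paid the gas rate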
And people wonder why I am making such a fuss about being utility-independent in retirement. The water and sewage companies are screwing us over - with government approval -, the electricity companies are screwing us over - with government approval -, the telephone monopoly is still present (just not officially) and keeping us 20 years behind other countries - with government approval...
I'm getting solar in now, so in retirement I pay nothing.
I'm getting greywater systems, an atmospheric water generator and other functions in over the next few years, so that in retirement I pay nothing.
Sure, they'll screw it out of me some other way, but at this point - as someone who very much has a socialist outlook - I'm just building my own utilities in a tiny little bungalow, ones which actually work better than the state ones. If Starlink wasn't owned by a certain person, I'd be telling BT where to go too.
I've mentioned this before, but I had Gemini, ChatGPT, and Claude jointly design me an aircraft, along with its engines. The sheer intricacy and complexity of the problem is such that it can take engineers years to get to what all three AIs agree is a good design. Grok took a look at as much as it could, before running out of space, and agreed it was sound.
Basically, I gave an initial starting point (a historic aircraft) and had each in turn fix issues with the previous version, until all three agreed on correctness.
This makes it a perfectly reasonable sanity check: if an engineer who knows what they're doing looks at the design and spots a problem, then AI has an intrinsic problem with complex problems, even when the complexity was iteratively produced by the AI itself.
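Roughly, the round-robin process described above looks like the sketch below. The ask_model() helper is hypothetical - in practice each call would go to the Gemini, ChatGPT or Claude API - and "agreement" here just means a model reporting no further issues, which is not the same thing as the design actually being correct.

# Hypothetical sketch of the iterative cross-review loop.
def ask_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call to the named model."""
    raise NotImplementedError("wire up the real client for each model here")

def refine_design(initial_design: str,
                  models=("gemini", "chatgpt", "claude"),
                  max_rounds=20) -> str:
    design = initial_design
    for _ in range(max_rounds):
        approvals = 0
        for model in models:
            review = ask_model(model, f"List any problems with this design:\n{design}")
            if "no problems found" in review.lower():
                approvals += 1
                continue
            # Have the same model fix the issues it raised, then hand the
            # revised design to the next model in the rotation.
            design = ask_model(model, f"Fix these problems:\n{review}\n\nDesign:\n{design}")
        if approvals == len(models):   # all three agreed in the same round
            return design
    return design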
It seems they all mess up. Time for real penalties, large enough to make it worthwhile to hire actual experts and let them do it right. Otherwise this crap will continue, and it is getting unsustainable.
No, no one gets security right, and they never will. Security is hard, and even actual experts make mistakes.
The best you can do is to expect companies to make a good effort to avoid vulnerabilities and to run vulnerability reward programs to incentivize researchers to look for and report bugs, then promptly reward the researchers and fix the vulns.
And that's exactly what Google does, and what Google did. Google does hire lots of actual security experts and has lots of review processes intended to check that vulnerabilities are not created... but 100% success will never be achieved, which is why VRPs are crucial. If you read the details of this exploit, it's a fairly sophisticated attack against an obscure legacy API. Should the vulnerability have been proactively prevented? Sure. Is it reasonable that it escaped the engineers' notice? Absolutely. But the VRP program incentivized brutecat to find, verify and report the problem, and Google promptly fixed it, first by implementing preventive mitigations and then by shutting down the legacy API.
This is good, actually. Not that there was a problem, but problems are inevitable. It was good that a researcher was motivated to find and report the problem, and Google responded by fixing it and compensating him for his trouble.
As for your proposal of large penalties, that would be counterproductive. It would encourage companies to obfuscate, deny and attempt to shift blame, rather than being friendly and encouraging toward researchers and fixing problems fast.