Comment Re: No. Just no. (Score 1) 124

Like Pascal's wager (somewhat odd, given Pascal's expertise) it's also…exceptionally dubious…in its handling of probability. Not just the question of 'will hyper-computronium actually make that sort of simulation viable; and will there be enough of your state stored to simulate?'; but the decision matrix itself.

Pascal just flattened it into "be theist"/"be atheist"; despite the fact that he was well aware of at least a handful of mutually incompatible ways to do theism, some of which have explicitly jealous gods; and it's trivial to construct as-plausible adversarial cases (e.g. a god who rewards sincere believers, regardless of belief; while punishing those who are trying to work the system, regardless of whether that happens to lead them to correct beliefs; which is essentially the worst possible god to wager with).
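A toy expected-value calculation (payoffs and priors are arbitrary stand-ins, purely for illustration) shows how adding even one adversarial hypothesis wrecks the wager's arithmetic: once both an infinite reward and an infinite punishment are on the table for the same strategy, the expected value is simply undefined.

```python
# Toy decision matrix for Pascal's wager, extended with one
# "adversarial god" hypothesis (rewards sincere belief, punishes
# system-gamers). All payoffs and priors are made-up illustrations.

INF = float("inf")

# payoffs[strategy][hypothesis]
payoffs = {
    "wager (believe to win)": {"no god": 0, "pascal god": INF, "adversarial god": -INF},
    "sincere unbelief":       {"no god": 0, "pascal god": -INF, "adversarial god": INF},
}

# Any nonzero prior on both infinite-payoff hypotheses gives each
# strategy an expected value of the form (+inf) + (-inf) = NaN.
priors = {"no god": 0.8, "pascal god": 0.1, "adversarial god": 0.1}

evs = {s: sum(priors[h] * row[h] for h in priors) for s, row in payoffs.items()}
for s, v in evs.items():
    print(s, "->", v)  # both come out NaN: the comparison is undefined
```

Which numbers you pick doesn't matter; the point is that the wager only "works" after the adversarial rows have been deleted from the matrix.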

The Basilisk bros go on at greater length, but similarly flatten the decision matrix until it flatters their desired conclusion. Apparently none of them are enthusiasts of I Have No Mouth, and I Must Scream, because the possibility of the AI torturing you specifically out of displeasure with your culpability for its existence isn't a concern; nor are more indiscriminate Skynet types, or powerful but disengaged universal-paperclip-maximizing expert systems.

The wager was shoddy work, if at least provocative, even back when it was written; and it has aged poorly.

Comment Re:Not a useful addition. (Score 1) 36

*ignores sarcasm because, well, I have a 4 digit UID and get to.

I think everyone pretty much knew it already; the point is that I can measure the complexity at which AI breaks, which is quite different from merely qualitatively feeling that it doesn't work as well as the claims say.

Comment Re:No. Just no. (Score 2) 124

In reality, they can plan whatever they want but actual business needs will shape those ambitions. (Is Microsoft really intending to entirely abandon the server role? Nah.)

The phone, I think, is much more likely to undergo a full transformation into something more like a "digital assistant" that acts in some ways like a human assistant.

But I also don't think 'computers as we know them' will disappear for what they have always been used for, and good at; and something like Windows must remain for that.

Comment Re: I'm surprised (Score 1) 40

Uh oh. I'm typing this on my new ThinkPad, after switching from a Dell Latitude because I wasn't impressed with how it weathered over time. (In particular, the screen hinge gave out and the speakers kept fizzing out.)

Design-wise, I love the ThinkPad. I put it next to my old T400 and it's remarkably recognizable. That was one of the early Lenovo ThinkPads, and it still works.

The new one has a good-feeling keyboard, a decent number of ports, and an external LED to show its power state. It feels sturdy, not razor-thin. I hope it turns out to be good long-term, but you're making me second-guess that.

Comment Re:Is the "AI" part actually relevant? (Score 3, Interesting) 40

There is a chicken-and-egg problem: which comes first, the apps or the hardware to make them runnable?

My hearing is increasingly bad, so I use live transcription on my phone more and more. On the Google Pixel phones, this runs locally instead of having to upload the audio to Google's servers. This is made possible by the on-phone NPU.

You can argue, if you want, that speech recognition is not AI; but having watched speech recognition remain a huge unsolved problem for decades until it was "solved" (well enough) by deep neural nets, I will disagree.

Apple's face recognition is another example.

Comment Re:Butlerian Jihad? (Score 1) 11

Agreed: "AI safety" is neither practical nor possible; in part because AI has no awareness or sense of meaning, in part because AI has no capacity for introspection, but also because AI is only useful if it can handle hard questions, and hard questions are, by their nature, not safe. And, as usual, because it would utterly destroy the entire economic model of the AI companies.

I can't find any obvious evidence that the guy really knows what "humanity-advancing" means, beyond advancing his own take on the world.

Comment Re:well yes (Score 1) 103

That's just plain untrue. You generally can't unilaterally alter the terms of a loan; but (potentially subject to certain penalties) you can just repay the loan on your preferred schedule if you can find someone offering a more favorable one that you can use to do so; and, if the creditor agrees, you can outright alter the terms of your previous agreement. They wouldn't really have any incentive to accept something worse than what they think the court will offer during a Chapter 11 proceeding; but unless they really, really want to prove a point, it's entirely possible that they'd take terms worse than the original ones but better than the bankruptcy is likely to provide.

Obviously it costs a certain amount of money just to provide coffee and toner, plus the hourly cost of whoever will be discussing the matter, so you probably aren't going to 'renegotiate' some teeny little loan; especially if it's some sort of unsecured consumer credit, or a situation where the finances of the debtor aren't entirely transparent or fixed (Kodak's ability to just stop buying materials from their suppliers without folding even faster is likely very limited, and they were already presumably trying to get favorable prices; while Visa might suspect that you could spend less on groceries by going store-brand if they put the fear of collections hell into you). So a lot of consumer-tier 'refinancing' is strictly of the "pay off high-interest credit card debt or usurious dealer vehicle loan with a mortgage" / "pay off a mortgage from a higher-interest period by taking out a mortgage during a low-interest period" variety; but a $440 million publicly traded company isn't really home economics territory.
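As a back-of-the-envelope sketch of why that "pay off the high-interest debt with the cheaper loan" move works at all (made-up numbers, standard fixed-rate amortization, fees and penalties ignored):

```python
# Monthly payment on a fixed-rate, fully amortizing loan.
# Standard annuity formula; annual rate, compounding monthly.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

balance = 20_000.0  # hypothetical debt being refinanced

# Illustrative terms: 24% APR card-style debt vs. refinancing at 7%,
# both paid off over the same 5 years.
card = monthly_payment(balance, 0.24, 5)
refi = monthly_payment(balance, 0.07, 5)

print(f"card: ${card:,.2f}/mo, ${card * 60 - balance:,.2f} total interest")
print(f"refi: ${refi:,.2f}/mo, ${refi * 60 - balance:,.2f} total interest")
```

The gap in total interest is large enough to be worth the paperwork for a consumer; for a $440 million company, the same arithmetic scales up to where a negotiated restructuring can beat everyone's Chapter 11 outcome.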

Comment Not a useful addition. (Score 2) 36

Gemini still struggles with complex problems and large numbers of files. These, IMHO, should take priority over personalisation. I've mentioned in my journal that it is really struggling on anything that is non-trivial. Why should I care what it remembers if it is going to be used in engineering but can't solve engineering problems in areas where you'd actually want AI?

The benchtest I'm using is, yes, more complicated than figuring out how to wire up a Christmas lights display. On the other hand, it's also where it's going to get used and where it needs to work well.

Let's leave the pretty baubles to one side and actually get Gemini working well, OK?

User Journal

Journal Journal: Testing AIs

OK, I've mentioned a few times that I tried to get AIs (Claude, Gemini, and ChatGPT) to build an aircraft. I kinda cheated, in that I told them to re-imagine an existing aircraft (the de Havilland DH.98 Mosquito) using modern materials and modern understanding, so they weren't expected to invent a whole lot. What they came up with would run to around 700 pages of text if you were to prettify it in LaTeX. The complexity is... horrendous. The organisation is... dreadful.

Comment Really? (Score 1) 86

If Silicon Valley were so concerned about the 'open' model gap, isn't there something fairly obvious they could do?

It's not like the proprietary as-a-service-so-Sam-can-protect-you-from-terminators-and-stuff models remain as tightly gated SaaS stuff by accident. I'll believe that team VC is peevish about the speed with which their unbelievably massive volume of dubiously justified capex is being commodified; but don't even try to tell me that the state of the 'open' offerings is some kind of perturbing surprise to the people who could change it tomorrow by making their not-open offerings open if they felt like it.
