I really really wood knot want to corner on wooden wheels. They'd turn to Ash and would be a Birch to fix.
*ignores sarcasm because, well, I have a 4-digit UID and get to.
I think everyone pretty much knew it already; the point is that I can measure the complexity at which AI breaks, which is quite different from merely feeling, qualitatively, that it doesn't work as well as the claims say.
The phone, I think, is much more likely to undergo a full transformation into something more like a "digital assistant" that acts in some ways like a human assistant.
But I also don't think 'computers as we know them' will disappear from the things they have always been used for, and are good at, and something like Windows must remain for that.
Design-wise, I love the Thinkpad. I put it next to my old T400 and it's remarkably recognizable. That was one of the early Lenovo Thinkpads, and it still works.
The new one has a good-feeling keyboard, a decent number of ports, and an external LED to show its power state. It feels sturdy, not razor thin. I hope it turns out to be good long-term but you are making me second-guess.
My hearing is increasingly bad, so I use live transcription on my phone more and more. On the Google Pixel phones, this runs locally instead of having to upload the audio to Google's servers. It's made possible by the on-phone NPU.
If you want, you can argue that speech recognition is not AI, but having watched speech recognition remain a huge unsolved problem for decades until it was "solved" (well enough) by deep neural nets, I will disagree.
Apple's face recognition is another example.
Agreed, "AI safety" is neither practical nor possible, in part because AI has no awareness or meaning, in part because AI has no capacity for introspection, but also because AI is only useful if it can handle hard questions and hard questions are, by their nature, not safe, and (as usual) because it would utterly destroy the entire economic model of the AI companies.
I can't find any obvious evidence that the guy really knows what "humanity-advancing" means, beyond advancing his own take on the world.
Gemini still struggles with complex problems and large numbers of files. These, IMHO, should take priority over personalisation. I've mentioned in my journal that it is really struggling on anything that is non-trivial. Why should I care what it remembers if it is going to be used in engineering but can't solve engineering problems in areas where you'd actually want AI?
The benchtest I'm using is, yes, more complicated than figuring out how to wire up a Christmas lights display. On the other hand, it's also where it's going to get used and where it needs to work well.
Let's leave the pretty baubles to one side and actually get Gemini working well, OK?
OK, I've mentioned a few times that I tried to get AIs (Claude, Gemini, and ChatGPT) to build an aircraft. I kinda cheated, in that I told them to re-imagine an existing aircraft (the DeHavilland DH98 Mosquito) using modern materials and modern understanding, so they weren't expected to invent a whole lot. What they came up with would run to around 700 pages of text if you were to prettify it in LaTeX. The complexity is... horrendous. The organisation is... dreadful.
Polymer physicists are into chains.