Not really-- this solution is even weirder. We've already been moving toward a "thin client" model, with everything running in web browsers that talk to web applications running on a server.
This takes that and runs the browser on the server too, streaming its output to you locally. So it's like saying, "What if we emulate the thin client on the server as well, and make an even thinner client that just connects to the emulated thin client, which then connects to the server to run things?"
If web browsers are such resource hogs that you can't even run the browser on your local computer, it's time to reevaluate what you're doing.
Not only do paid apps have a hard time competing with free apps, but apps that don’t come bundled with the OS are going to have a hard time competing with apps that are bundled with the OS.
It’s not just laziness. A bundled app is generally assumed to be the “default” app. Users assume that it’s the safest, least problematic choice, and often enough they’re right. The fact that the app is bundled often means that it has some amount of OS integration that 3rd party apps won’t have. It may be, for example, that you can download a competing 3rd party app, but if you click on a link it’ll load the default app anyway.
So a bundled free app against a paid unbundled app? Why take the risk of paying for something that’s probably not going to work well anyway?
Of course I think operating systems should generally not ship with applications beyond a bare minimum, in order to lessen this problem. If Apple wants to release a free podcast app, that’s all well and good but they shouldn’t have it installed on iOS by default, build iOS functions to assume it’s installed, or give it any priority whatsoever over other podcast apps. It’s the Internet Explorer problem all over again.
I don't know or understand the technical details of how he plans for things to work, but I think there are a few different concepts that people conflate: verifying your real-life identity online (say, to your bank); establishing an online identity and proving that you control it; verifying that accounts on different sites and services are controlled by the same person; and tracing an online identity back to the real-life person behind it.
Right now we can't really do any of those things, or at least private individuals can't do them easily and reliably. Some of those things are abilities for you to control your identity, and some are for you to get information about me. Some of those things aren't possible without another one, but some would be technically possible to do on their own.
For example, I don't think it'd be possible to allow me to verify/authenticate my real-life identity online for banking without also establishing some ability for me to authenticate my activity to an online identity. I can't make it possible to verify to Twitter and Slashdot that I really am John Smith from Denver without also making it possible to verify that my Twitter and Slashdot accounts are controlled by the same person. On the other hand, it'd be technically possible to do the opposite, and allow me to verify that my Twitter and Slashdot accounts are controlled by the same person (via certificate or SSO) without tying that identity to a real-life person named John Smith. In fact, you could still allow people to create multiple completely independent verified identities, and not link any of them to a real-life individual.
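To make that "same controller, no real name" idea concrete, here's a minimal sketch built on an ordinary keypair. The challenge strings and account names are made up and no real site implements exactly this flow; it just shows that proving two accounts share one controller doesn't require revealing who that controller is.

```typescript
// Sketch only: one keypair stands in for one online identity.
// Nothing here ties the key to a real-life name.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Hypothetical challenges issued by two different sites.
const twitterChallenge = Buffer.from("twitter-challenge-123");
const slashdotChallenge = Buffer.from("slashdot-challenge-456");

// The account holder signs each site's challenge with the same private key.
const twitterSig = sign(null, twitterChallenge, privateKey);
const slashdotSig = sign(null, slashdotChallenge, privateKey);

// Anyone holding the public key can confirm that both accounts answer to
// the same key -- i.e. the same controller -- while learning nothing about
// whether that controller is John Smith from Denver or anyone else.
const sameController =
  verify(null, twitterChallenge, publicKey, twitterSig) &&
  verify(null, slashdotChallenge, publicKey, slashdotSig);

console.log("Both accounts controlled by the same key:", sameController);
```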
In fact, I'd argue that each of the capabilities that I listed should be made available to individuals, except for the last one. I should be able to establish any number of online identities, verify them across multiple sites and services, and if I choose verify any or all of them against a real-life identity. I should be able to do that easily, using open standards and protocols. Inherently, that opens the possibility of you tracking any one of those identities across sites and services.
However, we should always seek to prevent the last item that I listed, in order to preserve anonymity. The fact that I posted something here as nine-times shouldn't necessarily and automatically give you information about my real-life identity. I think making anonymity complete and absolute carries some danger, but we need to preserve the limited anonymity we currently have online, and perhaps expand it in some areas.
This gets at a thing that really frustrates me about computing, and the big example that sticks out for me is: there's basically nothing that I do on a computer today that I couldn't have done fine on Windows 2000. However, instead of spending the intervening years making Windows 2000 clean and stable and secure and problem-free, Microsoft keeps reskinning it and making it more complicated, more confusing, and harder to control.
Just quit it with the marketing and UI redesigns for a couple of years. Instead, talk to end-users and IT professionals about what's causing problems for them, and fix those things.
It seems like Linux developers have a tendency to do the same thing, spreading a lot of effort across a bunch of desktop environments that are constantly rejiggering their UIs rather than fixing long-standing, important problems, though honestly I think Linux has done better. The Linux desktop experience has improved much more than the Windows user experience has, for example.
Honestly, you wouldn't need AI doing fancy things to drastically reduce the need for IT personnel. All you'd need is better-quality IT products. There's a lot of work wasted dealing with bugs, poor-quality hardware and drivers, and terrible design choices. Too many developers and hardware vendors opt to create shoddy, gimmicky products that don't work, and then IT has to spend hours and hours trying to make them work.
For example, I remember when iPhones first started making their way into the workplace, and it cut out a bunch of work for my department at the time. Instead of supporting crappy BlackBerry and Windows devices, employees suddenly had a smartphone that was pretty reliable and easy for them to use, and that didn't require a bunch of IT intervention. (I'm sure that example will be a little controversial, and someone will want to say "Nah, Blackberries were awesome and iPhones are stupid!" but this was my real-life personal experience, not an ideological argument about how you feel about Apple.)
If Microsoft would just fix their products and make them work sensibly, it'd cut out a lot of the things my department needs to work on and figure out.
Now there's also the question of what jobs improvements in IT are likely to eliminate. Better products would reduce the need for some technicians and support people, but I don't expect that products will get less stupid and gimmicky in the next 10 years. I fear they'll get worse. AI may improve monitoring and response, but you'll still need someone to evaluate the AI products, figure out which ones to use, make a business case for buying one, figure out how to implement it, and then keep track of it and troubleshoot areas where it doesn't do what it's supposed to.
And I think it's also worth noting that if you make an AI that can do good security monitoring and response, that may displace some low-level security monitoring employees, but the biggest impact will probably be to enable proper monitoring at companies that don't currently do it, or don't do it well. I think a lot of the AI coming in the next few years will do that sort of thing: provide better security monitoring for companies that don't currently do a good job of it, and tag files with metadata that would otherwise have to be assigned by hand, at companies that wouldn't currently pay someone to sit around tagging files.
So you're right, I don't think IT workers should be concerned about AI replacing their jobs in general. AI may replace human work involved in clear and discrete tasks such as IT monitoring and real-time response, receiving calls and routing them, analyzing trends and generating reports, but in a broader sense I think we're safe. Not just because management is bad at understanding what they want, but because developers are terrible at building things. If Microsoft can't make Windows Update work reliably and without problems, what are the chances that they'll make an AI that can run whole IT departments without people in the loop? AI isn't that smart, and the businesses that are developing the AI aren't very smart either.
Actually, both are suffering from the same phenomenon: hoarding by scammers. In both cases, bots buy up as many as they can for the purpose of scalping. For video cards, there’s an additional “legitimate” source of demand from cryptocurrency mining, which is itself another scam.
Nope, they’re rebuilding their apps as Electron-based web applications, and it won’t just be Outlook. They’ve been telegraphing this for a while, and it’s part of the reason they switched Edge to use Chromium.
It sounds like they’re basically working on improving the Electron integration with Windows so that applications will use shared libraries that are part of the OS, instead of each app shipping its own integrated web browser. I don’t know the technical details, but that’s the gist of what they were saying a couple of years ago.
Being this kind of web application makes it easy for them to develop cross-platform and give people the same experience whether they’re using the Mac, Windows, or Linux application, or the web application in a browser. Teams, for example, is the same on each platform. VS Code is the same on every platform. The plan is for all of their Office applications to be like that: a Mac app, a Linux app, and a Windows app, all identical to the web application that’s available from your browser. The difficulty they’re up against is making sure the new web-based versions have enough of the functionality of the full native apps that their user base doesn’t rebel.
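For a sense of what “each app shipping its own integrated web browser” looks like in practice, here’s a minimal Electron-style sketch; the URL and window options are placeholders, not Microsoft’s actual code:

```typescript
// Minimal Electron main-process sketch: the "app" is just a window that
// loads a web application, with a Chromium engine bundled inside the app
// package rather than shared with the OS. Placeholder URL, not Microsoft's
// actual implementation.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1200, height: 800 });
  // The "native" app is the same web app you could open in a browser tab.
  win.loadURL("https://outlook.example.com");
});

// Standard Electron boilerplate: quit when all windows are closed
// (macOS apps conventionally stay running).
app.on("window-all-closed", () => {
  if (process.platform !== "darwin") app.quit();
});
```

The shared-library approach described above would presumably keep this same model, but with the browser engine supplied by the OS instead of bundled into every app’s install.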
Yeah, I hate stuff like this.
Why a Teams button? Why not an Outlook button, or a File Explorer button? Is Microsoft so bad at building UIs that they can’t come up with a way to launch applications in software?
When I buy hardware that I intend to keep for years, I don’t want the design to be determined by what Microsoft’s marketing team believes will make me use the product they’re pushing this month. I want something generic and future-proof, so it’ll continue to be useful for the life of the product. I don’t want buttons on my monitor. Hell, if all of those hardware controls for brightness and contrast could be handled in software by the OS, I’d prefer that. Monitors are for display, not for buttons.