We put a ton of money and engineering into getting network latency well below 0.1 seconds. For stuff on the scale of Facebook, for instance, latency is addressed by moving data to where it's likely to be used next. (If you go on a trip to another continent, Facebook will copy your profile data to a datacenter on that continent.) In low Earth orbit, a satellite completes an orbit in roughly 90 minutes to a couple of hours, so you'd have to move data constantly, or else eat the latency of fetching it from the far side of the world.
If you try to dodge that problem with geostationary orbit, you've got about as much latency as if your datacenter were halfway across the world: GEO is roughly 36,000 km up, so the round trip to the satellite alone costs around 240 ms at the speed of light, about what a round trip through fiber to the far side of the planet costs, since light in fiber travels at roughly two-thirds of its vacuum speed. If that kind of latency were generally considered acceptable, you'd see most companies today just putting datacenters wherever it's convenient and not worrying about being close to the end user.
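A rough sanity check on those numbers. Assumed constants: ~550 km for a Starlink-style LEO shell, 35,786 km for GEO, ~20,000 km as half of Earth's circumference, fiber at about two-thirds of c, and no routing or processing overhead:

```python
# Back-of-envelope latency estimates (rough assumptions: straight-line
# paths, vacuum speed for the radio hop, refractive index ~1.47 for fiber,
# no switching or processing delays).

C = 299_792.458          # speed of light in vacuum, km/s
C_FIBER = C / 1.47       # roughly two-thirds of c in optical fiber

LEO_ALTITUDE = 550       # km, a Starlink-like shell (assumed)
GEO_ALTITUDE = 35_786    # km, geostationary orbit
ANTIPODE = 20_000        # km, about half of Earth's circumference

def rtt_ms(one_way_km, speed_km_s):
    """Round-trip time in milliseconds for a given one-way distance."""
    return 2 * one_way_km / speed_km_s * 1000

print(f"LEO satellite directly overhead: {rtt_ms(LEO_ALTITUDE, C):.0f} ms")
print(f"GEO satellite:                   {rtt_ms(GEO_ALTITUDE, C):.0f} ms")
print(f"Fiber to the far side of Earth:  {rtt_ms(ANTIPODE, C_FIBER):.0f} ms")
# -> roughly 4 ms, ~240 ms, and ~200 ms: the GEO hop alone is already as
#    bad as a terrestrial round trip to the antipode.
```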
Your orbital datacenters would need to host applications that either require little local storage (so it doesn't matter which satellite you connect to) or are very insensitive to latency (so it doesn't matter that the satellite is roughly 36,000 km away).
Generative AI and some types of supercomputing might fit the bill. But both are power-hungry and therefore cooling-hungry. Cooling stuff in space is hard.
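For a sense of scale on the cooling problem: in vacuum there's no air or water to carry heat away, so every watt has to be radiated. A rough Stefan-Boltzmann estimate, assuming 1 MW of load, 300 K radiator surfaces, emissivity 0.9, panels radiating from both faces, and ignoring heat absorbed from the Sun and Earth:

```python
# How much radiator does a power-hungry satellite need? In vacuum the only
# way to reject heat is radiation, governed by the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * T^4

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9      # assumed radiator coating
T_RADIATOR = 300.0    # K, roughly room temperature (assumed)
P_LOAD = 1_000_000    # W of heat to reject (assumed 1 MW cluster)

flux = EMISSIVITY * SIGMA * T_RADIATOR**4    # W per m^2 of radiating surface
panel_area = P_LOAD / (2 * flux)             # panels radiate from both faces

print(f"Radiated flux: {flux:.0f} W/m^2 per face")
print(f"Panel area:    {panel_area:.0f} m^2 for 1 MW")
# -> ~413 W/m^2 and ~1,200 m^2 of deployable radiator panels, before you
#    even count the solar arrays needed to generate that megawatt.
```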
Schmidt is letting his sci-fi imagination run wild and ignoring practical realities.