Batman's a scientist!
"When I ask my other tech friends what they would do, they simply suggest changing ISPs. Nobody likes Comcast, but I don't have a choice here. I'm two years into a three-year contract. So, moving is not an option"
Moving is always an option. But you have to eat the cost of one year of Comcast. Sorry, but that's your solution.
So, I checked slashdot on my phone today over lunch, and I saw the big "We hear you!" post discussing beta. Then, I got home tonight and was redirected to the new beta interface. So, clearly, slashdot the corporate group doesn't hear what slashdot the community is saying. If people are still being involuntarily redirected to something that has put the community on the edge of open rebellion, slashdot is clearly plunging in relevance even faster than a post-Gox bitcoin. It's been a good run. I had over a decade of fun here on slashdot. I had excellent karma. But, clearly it's time for me to walk away. It's a shame that that
I count that as wise. If you put in a real IP address, it would likely get a lot of traffic.
Which is why I've always been confused by the fact that they use fictitious IPs, rather than a production company website with trailers for upcoming projects...
It is a lot of work to raise your arm and point at an exact location on the screen (and slow too). After a short time you will be feeling the fatigue building up in your arm, which starts feeling very heavy. Then you will hate your touch screen and go back to using a mouse, touchpad, or keyboard, none of which require you to make large arm movements, or hold up the weight of your arm in front of you.
Why is touch on the desktop always assumed to be something that would have to replace using other inputs? I mean, if touch added $5 to my monitor, and I used it once every few weeks, I'd consider that a win. And, if it were widely deployed, economies of scale would mean that it really would be very cheap to add. (Like audio on the motherboard.) Having things like pinch to zoom could be handy on the desktop.
Way easier to toss a condom than to clean a sex bot. Just sayin'.
Instead, they ran rampant and now we have a bullshit system which, even on my system, sometimes fails... Chrome doesn't play audio, Firefox does... no idea why. Getting my HDMI TV to play sound on Fedora was interesting, too: the eventual solution was that I had to edit a file in /usr/share and add a :0 to the end of one of the parameters... I have no idea why. In Linux Mint it was fixed and I never had to do it... but weird shit like this seems to happen all the time...
Despite my best efforts, with Chrome on Ubuntu, some YouTube videos will play out of one sound card, and some will play out of another. I think it's Flash vs. HTML5 being used for different videos. Seriously, it's the most bewildering user experience to have to randomly switch between my USB headphones and my analog headphones. Getting Bluetooth audio working reliably is just a lost cause. Skype used to work; I apparently broke it in the course of trying to fix other things. Ten years of professional experience as a UNIX admin, and I can't figure out how to make YouTube work without wearing two different headphones. It's sort of fucked.
Slashdot's terrible at interviews. Hopefully somebody much more qualified will interview them, and then a month later Slashdot will post a link to it several times.
Well, if he has identified it as taking up a large amount of the available bandwidth, then it certainly makes sense to consider it a target for reductions. Perhaps more importantly, users tend not to care about updates like that. A user actively downloading a file from some source is probably more important than some automated process the user doesn't care about, which can be deferred until the user gets home without them noticing anything.
That said, I've been saying for a while that there needs to be some sort of bandwidth discovery protocol. My original thought process was driven by apps on mobile phones, but this seems like it would benefit for the same reasons. Wireless operators are always concerned about using scarce bandwidth resources, so we get plans with low data caps and such. Imagine if there was a completely standardised way for an application (say, an email app on a phone) to "ping" bandwidthdiscovery://mail.foo.com with some sort of priority metric. If nothing responded back, it would act normally, so the system would be completely backwards compatible. If something did respond back along the route (for example, the wireless ISP you are connected to, though it could theoretically be something local or distant, like the school's DD-WRT router in the OP's example), it could reject the session or encourage a delay. That way an email app set to check every 5 minutes could occasionally get a polite rejection from the ISP asking the app to hold off since circuits are overloaded. The phone would then wait a few minutes before trying again. Eventually the phone would download new email, but at high-traffic times it might wind up going 15 minutes instead of 5, saving the network some trouble. Software updates might defer a download for days or weeks if there is a continual rejection.
My Android phone lets me set software updates and podcast downloads to only happen over wifi, under the assumption that cellular data is expensive, but wifi data is unlimited. But, if I connect to a Mifi access point connected to a cellular connection, my phone currently has no way to discover that it is actually using (limited) cellular data. With a bandwidth discovery protocol, it would get the same rejections from the ISP that it would get if it had directly connected to the cellular data itself. And, local admins could easily set up rejection rules like the OP would be interested in, while still allowing the possibility of user overrides in cases where the school IT guy really wants to manually update the school's computer systems and whatnot. Think of it as a sort of queryable QoS.
And because any intermediate system on the route can let apps know to reduce bandwidth usage, a server being slashdotted could have some of its queries rejected, rather than everything happening on the link-local side near the user. Obviously, none of this helps the admin in the immediate term. But it would seem like that's how it ought to work.
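To make the idea a bit more concrete, here's a rough client-side sketch of that "queryable QoS" behaviour. None of this is a real protocol: the /.well-known/bandwidth-discovery endpoint, the JSON shape, the priority scale, and the timeouts are all invented for illustration; only the mail.foo.com host and the "defer politely and retry later" behaviour come from the description above.

```typescript
// Hypothetical client side of a bandwidth discovery protocol.
// Everything here (endpoint, message shape, numbers) is made up for illustration.

type DiscoveryVerdict = 'proceed' | 'defer';

interface DiscoveryReply {
  verdict: DiscoveryVerdict;
  retryAfterSeconds?: number; // a polite delay suggested by the network
}

// Ask anything along the route whether a low-priority transfer is welcome right now.
// If nothing answers before the timeout, behave exactly as apps do today.
async function askNetwork(host: string, priority: number): Promise<DiscoveryReply> {
  try {
    const resp = await fetch(`https://${host}/.well-known/bandwidth-discovery`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ priority }),
      signal: AbortSignal.timeout(2000), // silence means nobody implements it
    });
    if (!resp.ok) return { verdict: 'proceed' };
    return (await resp.json()) as DiscoveryReply;
  } catch {
    return { verdict: 'proceed' }; // backwards-compatible default
  }
}

// A mail check that backs off when the network says it is busy.
async function checkMail(pollSeconds: number): Promise<void> {
  const reply = await askNetwork('mail.foo.com', 1 /* low priority */);
  if (reply.verdict === 'defer') {
    const wait = reply.retryAfterSeconds ?? pollSeconds;
    console.log(`Network is busy, retrying in ${wait}s instead of ${pollSeconds}s`);
    setTimeout(() => checkMail(pollSeconds), wait * 1000);
    return;
  }
  // ...fetch new mail as usual...
}
```

The same hook is where a school admin's DD-WRT box, a MiFi, or a slashdotted server could answer "defer" while regular user traffic keeps flowing.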
Implicit semicolons. '5' + 3 gives '53' whereas '5' - 3 gives 2. I tried to include the famous JavaScript truth table. Look it up. Including it in the post just triggered the junk filter, but it's hilarious. JavaScript manages to be chock full of wtf even without the DOM at all. I always wished that Python would show up in the browser at some point. Once upon a time, the idea of genuinely novel scripting languages for web pages actually seemed plausible. (Remember VBScript web pages?) I guess there is so much legacy JS now that it's just the way things work and we'll never be completely rid of it.
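For anyone who hasn't run into it, here are a few of the coercion quirks being alluded to, written as TypeScript with `any` annotations purely so the compiler will let through expressions that plain JavaScript happily evaluates:

```typescript
// JavaScript's implicit coercions at work. The `any` annotations exist only so
// TypeScript doesn't reject comparisons and additions plain JS allows.
const five: any = '5';
const three: any = 3;
console.log(five + three); // '53'  -> + prefers string concatenation
console.log(five - three); // 2     -> - only means subtraction, so '5' is coerced to a number

const arr: any = [];
const obj: any = {};
console.log(arr + arr); // ''                 -> arrays stringify to ''
console.log(arr + obj); // '[object Object]'  -> plain objects stringify to '[object Object]'

const zero: any = 0;
const zeroStr: any = '0';
const emptyStr: any = '';
console.log(zero == zeroStr);     // true
console.log(zero == emptyStr);    // true
console.log(zeroStr == emptyStr); // false -> loose equality isn't even transitive

const notANumber: any = NaN;
console.log(notANumber == notANumber); // false
```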
And then you need to duplicate the whole thing in another datacenter for geographical redundancy.
Useful for some workloads, sure. But if it is an internal service, rather than something like a website (gasp, not all servers are public-facing websites), then if my office gets taken out by a meteorite, none of the corpses in the building actually care whether or not some instance of the service exists in some other, safer geographic region.
The flip side is that at a small scale, you get a certain amount 'for free.' If you need to have some infrastructure locally, then you already have some sort of room with space to put a new server in, and you already have sufficient electricity. You already have a guy to replace a blown hard drive. The extra time he spends replacing it is technically nonzero, but it's a fairly rare event, so a single extra server tends to be "in the noise." The big cost comes as soon as you exhaust your existing capacity, i.e. the guy is already replacing drives full time, so adding one more server will mean adding another full-time guy. Or all the racks are full and you will need to add more space. You can reach a point where the TCO of the last server was genuinely much less than outsourced infrastructure, but the TCO of the next server will effectively be $500,000 if you only add one more machine.
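To put toy numbers on that step function (every figure below is invented purely for illustration, including the size of the cliff):

```typescript
// Toy marginal-cost model for the "next server suddenly costs a fortune" point above.
// All of these numbers are made up just to show the shape of the curve.
const serverCost = 5_000;         // hardware for one more box
const rackCapacity = 40;          // servers that fit in the existing racks
const newRackBuildout = 150_000;  // space, power, and cooling for another rack
const adminSalary = 90_000;       // one more full-time person
const serversPerAdmin = 200;      // boxes one admin can reasonably babysit

// Cost of adding one more server when you already have `current` of them.
function marginalCost(current: number): number {
  let cost = serverCost;
  if (current % rackCapacity === 0) cost += newRackBuildout;  // racks are full
  if (current % serversPerAdmin === 0) cost += adminSalary;   // admin is maxed out
  return cost;
}

console.log(marginalCost(25));  // 5000   - "in the noise"
console.log(marginalCost(40));  // 155000 - suddenly needs a new rack
console.log(marginalCost(200)); // 245000 - needs a new rack and a new hire
```

The exact figures don't matter; the point is the long flat stretch followed by a cliff every time you cross an existing capacity boundary.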
It's hardly sudden. Developers have spoken about algorithms being compute-, bandwidth-, I/O-, or memory-bound for decades.
Emulating one piece of hardware on another in software is always slow. I remember when you needed a fairly beefy PC to play emulated NES games effectively. If you think that emulating a current console on a PC will never be practical, given that consoles are essentially just PCs themselves now, then your attention span is too short to have bothered reading this far into my comment, so I'm not entirely sure why I bothered.
Parkinson's Law: Work expands to fill the time allotted to it.