I've read most of the threads; there's some good knowledge out there. As has been said dozens of times, it's very difficult to pinpoint trouble.
I mean, consider the architecture: the last mile terminates inside a carrier's POP. From there on out, it's kind of a crap-shoot. Peering links get saturated (drops occur) or even go down, in which case re-routes come into play. The internet's advertised functionality is a survivable cloud, where re-routes fix saturated or downed links. In principle.
Having worked for tw telecom for a few years as a repair tech in their NOC, I can tell you outages and the like impact traffic in crazy ways. Without direct knowledge of which links are running at capacity or what outages are in progress, it is just so hard to really maintain path integrity. And that's for enterprise customers, who pay through the nose for an SLA.
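From the residential side, about all you can do is watch the path yourself and see when it shifts or where the loss starts. Here's a rough sketch of the idea, not gospel: it assumes mtr is installed and on your PATH, and "example.com" is just a placeholder for whatever host sits near the traffic you actually care about. It runs mtr in report mode every few minutes and flags when the hop list changes between runs, which usually means a re-route happened somewhere upstream.

```python
#!/usr/bin/env python3
# Sketch only: periodically run mtr in report mode, log per-hop results,
# and note when the hop list changes (a likely re-route upstream).
# Assumes mtr is installed; "example.com" is a placeholder target.
import subprocess
import time

TARGET = "example.com"   # placeholder; pick a host near your real traffic path
CYCLES = "10"            # probes per hop per run
INTERVAL = 300           # seconds between runs

def run_mtr(target):
    """Run mtr in non-interactive report mode and return its stdout."""
    result = subprocess.run(
        ["mtr", "-n", "--report", "--report-cycles", CYCLES, target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def hop_ips(report):
    """Pull just the hop IPs out of the report so two runs can be compared."""
    ips = []
    for line in report.splitlines():
        parts = line.split()
        # hop lines look like: "  3.|-- 10.0.0.1   0.0%  10  ..."
        if len(parts) > 1 and parts[0].rstrip(".|-").isdigit():
            ips.append(parts[1])
    return ips

previous_path = None
while True:
    report = run_mtr(TARGET)
    print(time.strftime("%Y-%m-%d %H:%M:%S"))
    print(report)
    path = hop_ips(report)
    if previous_path is not None and path != previous_path:
        print("*** hop list changed since last run: probable re-route upstream ***")
    previous_path = path
    time.sleep(INTERVAL)
```

None of that tells you which peering link is hot, but at least you get a timestamped record of when your path moved and which hop the loss shows up behind, which is more than most people have when they call support.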
Residential broadband is at the bottom of the priority list, virtually always Best Effort, which means exactly what it says. Factor in the knock-on effects of *cough* net neutrality (it is my hope Ajit Pai can never take a cup of coffee in public again without risk of a beating) and we are indeed in a brave new world of throughputs and L2/L3 traffic classifications.
It is easy to rail at the ISP, but unless, as a user, you contract for an SLA, and specifically an SLA covering what matters to you, the Interweb will ALWAYS be a crap-shoot. The miracle is that it works as well as it does.