Submission + - A bufferbloat-less Christmas (blogspot.com)

mtaht writes: Inside the LEDE project, two new core bufferbloat-fighting techniques are poised to enter the mainline Linux kernel and thousands of routers. The first is an fq_codel'd, airtime-fair scheduler for Wi-Fi; the second is the new "cake" qdisc, which outperforms fq_codel across the board when shaping inbound and outbound connections.

It's been nearly six years since the start of the bufferbloat project. Have you, or has your ISP, fixed your bufferbloat yet?
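For readers who want to try this on their own router: below is a minimal sketch (not the LEDE project's code) of how one might enable cake, falling back to fq_codel, on a Linux machine using the standard tc tool. The interface name and bandwidth figure are assumptions to replace with your own; cake shapes best when set just below the actual link rate, and both qdiscs require kernel support and root privileges.

```python
import subprocess

def shape_with_cake(dev: str, bandwidth: str) -> None:
    """Replace the root qdisc on `dev` with cake, shaped to `bandwidth`."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", dev, "root",
         "cake", "bandwidth", bandwidth],
        check=True)

def shape_with_fq_codel(dev: str) -> None:
    """Fall back to plain fq_codel (no shaper) on kernels without cake."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", dev, "root", "fq_codel"],
        check=True)

if __name__ == "__main__":
    try:
        shape_with_cake("eth0", "20Mbit")   # assumed interface and rate
    except subprocess.CalledProcessError:
        shape_with_fq_codel("eth0")
```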

Comment Dan Geer is a founder of computer security. (Score 1) 118

First: In-Q-Tel is the venture capital arm of all of the U.S. intelligence agencies, including DHS, the FBI, etc., not just the CIA. DHS, for example, will be blamed for any big security disaster; you should not presume that the motives of the agencies are uniform. Nor is all of what those agencies do bad.... It's the pervasive surveillance, and the compromising of our security standards, that we *must* stop. See https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.iqt.org%2Fabout-iqt%2F for In-Q-Tel's own description, rather than relying on the Wikipedia entry for Dan.

Second: Dan has never held a security clearance over his entire career.

Third: He's actually not an In-Q-Tel employee, but a full-time consultant for them. This is so that he does *not* have to sign an employee agreement, and can remain free to speak his mind. Which he does regularly: see http://geer.tinho.net/pubs for some of his publications. One I sparked him to write recently is http://geer.tinho.net/geer.lawfare.15iv14.txt, in reaction to the information I cover in my Berkman Center talk, which you can find at https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcyber.law.harvard.edu%2Fevents%2Fluncheon%2F2014%2F06%2Fgettys

Fourth: people who know Dan, who really is one of the founders of the computer security field, hold him in very high regard and trust him, as I do.

If you look at Dan Geer's career, rather than jumping to unfounded, ill-informed presumptions based on news reports that don't bother to go beyond reading the Wikipedia entry, you will find:
    1) he managed the development of Kerberos at Project Athena (where I got to know him);
    2) he co-authored the famous "Microsoft is a dangerous monoculture" paper a bit over a decade ago (which Microsoft hated so much they got @Stake to fire him);
    3) he is a holder of the USENIX Flame award: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.usenix.org%2Fabout%2Fflame

In short, guys, he's one of "us"....

Don't be ill-informed, Slashdotters....

Comment CDE was dead on arrival (Score 0) 263

The day that CDE finally appeared (badly late) on my workstation was the day I knew there was no hope for the UNIX desktop. Design by committee never works, and CDE was a camel with five humps.

Jim Gettys

Comment Don't volunteer on broadband... (Score 1) 160

Since all broadband connections have bufferbloat (to some degree or other), in all technologies (fiber, DSL, and cable alike), it isn't a good idea to volunteer to run an NTP server on such a connection, even if it has been reliable. Bufferbloat will induce transient timing errors into your time service, often in an asymmetric way (which is even more fun), pretty much any time you do anything over that link.
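To make the asymmetry point concrete, here is a small worked example (the numbers are illustrative, not measurements). NTP estimates clock offset as theta = ((T2 - T1) + (T3 - T4)) / 2, which reduces to (d_up - d_down) / 2 when the true offset is zero, so queueing delay on only one direction of the link shows up directly as apparent clock error:

```python
# Assumed figures: 20 ms of symmetric path delay, plus 500 ms of
# upstream queueing while an upload saturates the link.
base = 0.020
bloat_up = 0.500

d_up = base + bloat_up
d_down = base

# NTP's offset estimate is biased by half the delay asymmetry.
apparent_offset = (d_up - d_down) / 2
print(f"apparent clock error: {apparent_offset * 1000:.0f} ms")  # 250 ms
```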
                                                                    - Jim

Comment Re:Paper is ambiguous about what gets dropped (Score 1) 134

AQMs don't usually look at the contents of what they drop or mark.

We expect CoDel to be running on any bulk data queue; VoIP traffic, properly classified, would be in an independent queue, and not normally subject to policing by CoDel.

Ten years ago, a decent AQM like CoDel might have been able to get latencies down to where they should be for most applications. But browsers' abuse of TCP, in concert with hardware features such as smart NICs that send line-rate bursts of packets from single TCP streams, has convinced me that we must also do fair queuing/classification to get the latencies (actually, the jitter) where they need to be in the face of these bursts of arriving packets.
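For the curious, here is a minimal sketch of CoDel's control law, simplified from the published description (the real implementation also decays `count` between dropping episodes, among other details): drop only when the packet sojourn time has stayed above `target` for a full `interval`, then tighten the drop schedule as interval divided by the square root of the drop count:

```python
from math import sqrt

TARGET = 0.005     # 5 ms acceptable standing queue
INTERVAL = 0.100   # 100 ms, on the order of a worst-case RTT

class CoDelSketch:
    def __init__(self):
        self.first_above = None   # when sojourn time first exceeded target
        self.dropping = False
        self.drop_next = 0.0
        self.count = 0

    def should_drop(self, sojourn: float, now: float) -> bool:
        """Decide, at dequeue time, whether to drop this packet."""
        if sojourn < TARGET:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now
        if not self.dropping:
            # Only enter the dropping state after a full interval above
            # target: short bursts are left alone.
            if now - self.first_above >= INTERVAL:
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL / sqrt(self.count)
                return True
            return False
        if now >= self.drop_next:
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False
```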

Comment Re:Paper is ambiguous about what gets dropped (Score 2) 134

The article's subtitle is: "A modern AQM is just one piece of the solution to bufferbloat." We certainly expect to be doing fair queuing and classification in addition to AQM at the edge of the network (e.g. your laptop, home router, and broadband gear). I don't expect fair queuing to be necessary in the "core" of the network.

I'll also say that an adaptive AQM is an *essential* piece of the solution to bufferbloat, and a piece for which we've had no good solution (until, we think, now).

That's why this article represents "fundamental progress".

Comment Re:oversimplified PR noise ignores decade of resea (Score 5, Interesting) 105

You are correct that replacing one bad constant with another is a problem, though I'd certainly argue that many of our existing constants are egregiously bad, and substituting a less bad one makes the problem less severe. That is what the cable industry is doing this year in a DOCSIS change that I hope starts to see the light of day later this year: it can take bloat in cable systems down by about an order of magnitude, from typically more than one second to on the order of 100-200 ms. That's still not really good enough for VoIP to work as well as it should. But the perfect is the enemy of the good: I'm certainly going to encourage obvious mitigations such as the DOCSIS changes while trying to encourage real long-term solutions, which involve both re-engineering of systems and algorithmic fixes. There are other places where similar "no brainer" changes can help the situation.
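The arithmetic behind that order-of-magnitude claim is straightforward (the buffer and rate figures below are illustrative, not taken from the DOCSIS specification): the worst-case standing delay of a tail-drop buffer is simply its size in bits divided by the link rate:

```python
def queue_delay_s(buffer_bytes: float, rate_bps: float) -> float:
    """Worst-case standing-queue delay of a full tail-drop buffer."""
    return buffer_bytes * 8 / rate_bps

uplink = 2_000_000  # assumed 2 Mbit/s upstream

legacy = queue_delay_s(256 * 1024, uplink)  # oversized static buffer
tuned = queue_delay_s(32 * 1024, uplink)    # sized for roughly 100 ms

print(f"legacy buffer: {legacy:.2f} s of latency")  # ~1.05 s
print(f"tuned buffer:  {tuned * 1000:.0f} ms")      # ~131 ms
```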

I'm very aware of the decade-old research, and of the fact that what exists is either *not available* where it is now needed (e.g. in any of our broadband gear, our OSes, etc.) or *doesn't work* in today's network environment. I was very surprised to be told that even where AQM was available, it was often or usually not enabled, for reasons that are now pretty clear: classic RED and its derivatives (the most commonly available) require manual tuning, and if untuned, they can hurt you. Like you, I had *thought* this problem was *solved* in the 1990s; it isn't....

RED and related algorithms are a dead end: see my blog entry on the topic, http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/, and in particular the "RED in a Different Light" paper referenced there (which was never formally published, for reasons I cover in the blog posting). So thinking we can just apply what we have today is *not correct*; when Van Jacobson, who designed RED with Sally Floyd in the first place, tells me RED won't hack it, I tend to believe him.... We have an unsolved research problem at the core of this headache.
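For readers who haven't met it, classic RED's drop probability is a piecewise-linear function of an EWMA of queue length, and every knob in it (min_th, max_th, max_p, plus the averaging weight) must be tuned for a particular bandwidth and RTT. A sketch, with illustrative tunings showing how differently the same average queue can be treated:

```python
def red_drop_prob(avg_q: float, min_th: float, max_th: float,
                  max_p: float) -> float:
    """Classic RED: drop probability vs. the EWMA of queue length."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# The same 60-packet average queue under two plausible tunings:
print(red_drop_prob(60, min_th=50, max_th=150, max_p=0.02))  # 0.002
print(red_drop_prob(60, min_th=5, max_th=65, max_p=0.5))     # ~0.46
```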

If you were tracking kernel changes, you'd have seen "interesting" recent patches to RED and other queuing mechanisms in Linux. That bugs are still being found in these algorithms in this day and age shows just how little they have actually been used: in short, what we have had in Linux has often been broken, indicating little active use.

We have several problems here:
      1) basic mistakes in buffering, where semi-infinite, statically sized buffers have been inserted into lots of hardware and software. BQL goes a long way toward addressing some of this in Linux (the device driver/ring buffer bufferbloat that is present in Linux and other operating systems).
      2) variable bandwidth is now commonplace, in both wireless and wired technologies. Ethernet scales from 10 Mbps to 10 or 40 Gbps.... Yet we've typically had static buffering, sized for the "worst case". So even simple things like cutting the buffers proportionately to the bandwidth you are operating at can help a lot (similar to the DOCSIS change; see the sketch after this list), though with BQL we're now in a better place than before.
      3) the need for an AQM that actually *works* and never hurts you. RED's requirement for tuning is a fatal flaw, and we need an AQM that adapts dynamically over orders of magnitude of bandwidth *variation* on timescales of tens of milliseconds, a problem not present when RED was designed or when most of the AQM research of the 1990s was done. Wireless was a gleam in people's eyes in that era.
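As a sketch of point 2 (the rates and RTT here are assumptions): sizing a buffer to roughly one bandwidth-delay product at the current operating rate, instead of one static worst case for the fastest rate the hardware supports, already scales the standing delay sanely:

```python
def buffer_bytes(rate_bps: float, rtt_s: float = 0.100) -> int:
    """One bandwidth-delay product of buffering at the current rate."""
    return int(rate_bps * rtt_s / 8)

for rate in (10e6, 100e6, 1e9, 10e9):
    print(f"{rate / 1e6:>6.0f} Mbit/s -> {buffer_bytes(rate) // 1024} KiB")
```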

I'm now aware of at least two different attempts at a fully adaptive AQM algorithm; I've seen simulation results for one of them, and they look very promising. But simulations are ultimately only a guide (though sometimes a real source of insight): running code is the next step, along with comparison against existing AQMs in real systems. Neither of these AQMs has been published, though I'm hoping to see both published soon, with implementation happening immediately thereafter.

So no, existing AQM algorithms won't hack it; the size of this swamp is staggering.
                                                                                                                  - Jim

Cloud

Google Close To Launching Cloud Storage 'Google Drive' 205

MrSeb writes with this selection from ExtremeTech: "Why doesn't Google offer a cloud storage service to rival Dropbox, Box.net, or Microsoft's SkyDrive? Google has the most internet-connected servers in the world, the largest combined storage of any web company, and already offers photo storage (Picasa), document storage (Docs), music storage (Music), but for some reason it has never offered a unified Google Drive. According to people familiar with the matter, however, our wait is almost over: Google's Hard Drive In The Sky is coming soon, possibly 'within weeks.' Feature-wise, it sounds like Google Drive will be comparable to Dropbox, with free basic storage (5GB?) and additional space for a yearly fee."
Transportation

DC Comics Prevails In Batmobile Copyright Dispute 115

think_nix writes "Wired reports of U.S. District Judge Ronald S. W. Lew siding with DC Comics in the federal copyright court case against Gotham Garage owner Mark Towle. DC accused Towle of selling 'unlicensed replica vehicle modification kits based on vehicle design copyrights from plaintiff's Batman property, including various iterations of the fictional automobile, the Batmobile.' Lew noted that 'DC Comics pleads sufficient facts to support its allegations. Although, generally copyright law does not apply to "useful articles" such as autos.'"
Communications

Email Offline At the Home of Sendmail 179

BobJacobsen writes "The UC Berkeley email system has been either offline, or only providing limited access, for more than a week. How can the place where sendmail originated fall so far? The campus CIO gave an internal seminar (video, slides) where he discussed the incident, the response, and some of the history. Briefly, the growth of email clients was going to overwhelm the system eventually, but the crisis was advanced when a disk failure required a restart after some time offline. Not discussed is the long series of failures to identify and implement the replacement system (1, 2, 3, 4). Like the New York City Dept. of Education problem discussed yesterday, this is a failure of planning and management being discussed as a problem with (inflexible) technology. How can IT people solve things like this?"

Comment Re:Lag-o-Meter-of-Internet-Doom (Score 1) 124

There are significant networks that do not look like the consumer-edge Internet, one of which reportedly collapsed in a nasty way (not necessarily in the same way as in 1986). Don't presume that the network that may collapse is the global Internet (though time-based, self-synchronizing phenomena are a worry there). One of the functions of AQM algorithms is to ensure that TCP flows don't synchronize their behavior. And those AQM algorithms are MIA on many networks today.
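A toy illustration of the synchronization point (this is a cartoon of AIMD behavior, not a network simulation): with tail drop, every flow sees loss in the same RTT and halves together, so aggregate demand sawtooths violently; randomized early marking staggers the cuts:

```python
import random

def step(cwnds, synchronized, mark_prob=0.3, capacity=100):
    """Advance all flows one RTT: grow additively, halve on a loss signal."""
    overloaded = sum(cwnds) > capacity
    return [c / 2 if overloaded and (synchronized or random.random() < mark_prob)
            else c + 1
            for c in cwnds]

flows = [10.0] * 10
for _ in range(50):
    flows = step(flows, synchronized=True)
# synchronized=True: every flow halves in the same step, so the aggregate
# swings between roughly half of capacity and capacity.
# synchronized=False: the cuts are staggered, and the aggregate stays smoother.
```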

Those of us who lived through the 1986 congestion collapse are somewhat worried.

It's what we don't know that worries us: we're flying in territory we don't fully understand.

Comment Re:Doing it wrong, again (Score 2) 124

To solve this, I think we need both AQM fully deployed at the edge of the network and some sort of "fair" queuing at the edge. The headache is that the classic AQM algorithms won't work in the edge case (and are flawed). "Fairness" is in the eye of the beholder, and a complex question. You may consider it "fair" that your kids get half the bandwidth you do, for example. But a situation where talking to a local CDN 10 ms away gets tons more bandwidth than something across the globe may not be "fair" in your view; nor is it fair that your web browser can put a horrible transient into the queues, hurting other people sharing your home circuit, as it opens 10 TCP connections simultaneously.

1) Some sort of AQM is necessary to tell the endpoints to slow down in a timely fashion (by signalling the endpoints' TCPs). Otherwise the buffers fill, and stay full, and you are in fundamental trouble. Consider this the "high order" bit of solving the problem.
2) TCP does not guarantee any "fairness" between flows of different RTTs. Note that what you consider "fair" inside your house is your business; your ISP has some obligation around "fairness" between customers. This is the next bit of precision. Anyone who thinks TCP is "fair" by itself believes in magic that does not exist.

Note that these two issues might be addressed by one algorithm or by multiple algorithms, and what is best might be different in a host than in your home router, and different yet again elsewhere in the network. Exactly what we need (and what will work well) is as yet unknown, though almost anything (even going back to semi-sane buffer sizes selected with even trivial amounts of thinking) can improve things a lot. Getting there is going to require analysis and testing. I encourage people to help out: for example, we've not yet really played with SFB to see how it behaves in the face of variable bandwidth. Classic RED won't work in a sane fashion at all in that case.
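Since "fair" queuing keeps coming up: here is a minimal deficit-round-robin sketch of what per-flow fairness at the edge can mean mechanically (the quantum and the one-packet-per-call structure are simplifications; flow hashing, per-queue AQM, and everything else that makes this practical are omitted):

```python
from collections import defaultdict, deque

QUANTUM = 1514  # one MTU-sized packet of credit per round (assumed)

class DRRSketch:
    def __init__(self):
        self.queues = defaultdict(deque)  # flow id -> queued packet sizes
        self.deficit = defaultdict(int)
        self.active = deque()             # round-robin order of busy flows

    def enqueue(self, flow: int, size: int) -> None:
        if not self.queues[flow]:
            self.active.append(flow)
        self.queues[flow].append(size)

    def dequeue(self):
        """Return (flow, size) of the next packet to send, or None."""
        while self.active:
            flow = self.active[0]
            q = self.queues[flow]
            if q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                if not q:                  # flow went idle
                    self.active.popleft()
                    self.deficit[flow] = 0
                return flow, size
            # Head packet doesn't fit the credit: grant one quantum
            # and move on; each busy flow gets QUANTUM per round.
            self.deficit[flow] += QUANTUM
            self.active.rotate(-1)
        return None
```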

But hugely oversized, dumb, unmanaged, tail-drop buffers have got to go.

Thankfully, at the edge of the network (your host and home router), we have many more cycles to play with per packet than in a core router, and we can afford to burn some of them for this purpose. And the edge is where we're almost always congested (your ISP may have other congestion problems, but at the aggregation level of those devices, classic AQM algorithms can function, even if perhaps not ideally).
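Rough arithmetic behind "more cycles per packet at the edge" (the link rates are assumed, and this ignores small packets): the time budget per full-size packet shrinks by orders of magnitude as you move toward the core:

```python
PKT_BITS = 1500 * 8  # one full-size Ethernet payload

for name, rate in [("20 Mbit/s home uplink", 20e6),
                   ("1 Gbit/s LAN", 1e9),
                   ("10 Gbit/s core link", 10e9)]:
    per_pkt_us = PKT_BITS / rate * 1e6
    print(f"{name:>21}: {per_pkt_us:8.2f} us per packet")
# ~600 us at the home uplink, ~1.2 us on the core link
```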

The article that was posted yesterday is the short CACM article; Kathy and I have a much longer paper in preparation that will go into this in more detail. CACM has a word-count limit we could not meet, so the "fairness" topic ended up mostly on the cutting-room floor; it is now being picked up and finished.
I wasn't expecting the Queue posting to go "live" until next month (I was being naive), so the long paper is not finished. CACM really wanted to lead off January with a bang, so we redirected our efforts toward the short, and in many ways incomplete, discussion.
