I said in my last journal entry that there are many technologies for solving specific problems, but little documentation beyond a simple description of each.
To cover this further, I'd like to list packet dropping schemes. Just the packet dropping schemes, and no other elements of quality of service such as the queueing mechanisms that packet dropping is used with. This will help exemplify why proper understanding is important. (A minimal sketch of one of these schemes follows the list.)
Partial Packet Discard (PPD)
Early Packet Discard (EPD)
Age Priority Packet Discarding (APPD)
Preemptive Partial Packet Discard (pPPD)
Tail Drop
Random Early Detect (RED)
Weighted Random Early Detect (WRED)
Adaptive Random Early Detect (ARED)
Robust Random Early Detect (RRED)
Random Early Detect with Preferential Dropping (RED-PD)
Controlled Delay (CoDel)
Blue
Global Random Early Estimation for Nipping (GREEN)
Multi Global Random Early Estimation for Nipping (M-GREEN)
PURPLE
BLACK
WHITE
CHOose and Keep (CHOKe)
CHOKe-FS
CHOKe-RH
P-CHOKe
CHOKeD
gCHOKe
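To make "packet dropping scheme" concrete, here's a minimal sketch of the classic one, RED, in Python. The class name, parameter values, and thresholds are my own illustrative choices, not a reference implementation; real RED (and the variants above) adds details like idle-time handling and count-based drop spacing, and the tuning is exactly the part nobody documents.

```python
import random

class REDQueue:
    """A minimal sketch of Random Early Detect (RED).

    Parameter values here are illustrative placeholders, not tuned
    recommendations; picking them well is the undocumented hard part.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th   # below this average queue size, never drop
        self.max_th = max_th   # at or above this average, always drop
        self.max_p = max_p     # drop probability reached at max_th
        self.weight = weight   # EWMA weight for the average queue size
        self.avg = 0.0         # exponentially weighted average queue size
        self.queue = []

    def enqueue(self, packet):
        # Track a moving average of queue depth rather than the
        # instantaneous depth, so short bursts aren't punished.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)

        if self.avg < self.min_th:
            self.queue.append(packet)   # light load: always accept
            return True
        if self.avg >= self.max_th:
            return False                # heavy load: always drop

        # In between, drop with probability rising linearly toward max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None

# Toy demo: flood the queue without draining it and count the drops.
q = REDQueue()
drops = sum(0 if q.enqueue(n) else 1 for n in range(2000))
print(f"accepted {len(q.queue)}, dropped {drops}")
```

Every other scheme in the list above is, at heart, a different answer to the same two questions this sketch answers crudely: what signal do you watch, and how do you map it to a drop decision.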
And this isn't even close to exhaustive. Yet it's safe to say that no OS supports more than a small fraction of this list (Linux, for instance, ships RED, CHOKe, and CoDel, but few of the others), and equally safe to say that no developer out there knows which of the unimplemented schemes would be useful in typical environments for, say, Linux or FreeBSD.
I can also be fairly certain that no researcher working on new schemes knows everything that is currently out there, when each scheme is useful, or how best to tune it, which is what you'd need for a fair understanding of what a new scheme would have to do.
This should give people some idea of the scope of the problem. You often don't see AQM in the enterprise world because there are too many options and nothing on how to use them effectively.
Pretty much the same reason enterprise systems running an Ubuntu variant of Linux will use Ext4 or XFS: there may be far better filesystems for specific needs, but there are way too many options, they're too complex to set up if you don't know exactly what you're doing, and there's no insight into which need goes with which filesystem.
Even the use of AI won't help much - LLMs can't learn from data that isn't out there.
A simple neural net could be trained on a range of schemes and workloads and then generate advice on an optimal setup (sketched below), but if no researcher is doing anything more than a cursory comparison, then there's nobody in a position to create such an AI. And even then, it's only useful for comparing what's there; it still won't help developers figure out what they need to add.
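To be concrete about what such an advisor might look like, here's a hypothetical sketch using scikit-learn. Every feature, label, and data point is an invented placeholder; the missing ingredient is real comparative benchmark data, which is the whole point.

```python
# Hypothetical "AQM advisor" sketch. The features, labels, and rows
# below are invented placeholders standing in for benchmark data
# that, as argued above, nobody has actually gathered.
from sklearn.neural_network import MLPClassifier

# Invented features: [avg_rtt_ms, concurrent_flows, bulk_fraction, link_mbps]
X = [
    [10,   50, 0.9,  1000],   # LAN, bulk-heavy
    [80,  500, 0.2,   100],   # WAN edge, interactive
    [40, 2000, 0.5, 10000],   # datacenter aggregation
]
# Placeholder "best scheme" labels; real ones would come from benchmarks.
y = ["RED", "CoDel", "CHOKe"]

model = MLPClassifier(solver="lbfgs", hidden_layer_sizes=(16,), random_state=0)
model.fit(X, y)

# Ask for advice on a previously unseen workload.
print(model.predict([[60, 300, 0.3, 500]]))
```

The model itself is trivial; the weeks of methodical benchmarking needed to fill X and y honestly are not, and that's the gap.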
(Although it would be a great boost to network admins if they could push a button and have an AI figure out the best setup for their servers and network gear.)