<p dir="ltr">To add my tuppence to this discussion:</p>
<p dir="ltr">I don't believe real-world topologies and workloads are incompatible with academic experimentation. All you have to do is replicate the same workload and other conditions across each of the qdiscs you're testing, i.e. change only the qdisc between test runs. That's the basis of the scientific method: a controlled experiment.</p>
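<p dir="ltr">As a minimal sketch of what that controlled comparison can look like on Linux (the interface name, qdisc list, test tool, and server host here are assumptions, not anything from this thread):</p>

```shell
#!/bin/sh
# Swap only the root qdisc between otherwise identical test runs.
# IFACE and the netperf server are hypothetical; adjust for your setup.
IFACE=eth0

for QDISC in pfifo_fast codel fq_codel; do
    # Replace the root qdisc on the test interface (requires root).
    tc qdisc replace dev "$IFACE" root "$QDISC"

    # Run the identical workload each time, e.g. the flent RRUL test,
    # labelling each run with the qdisc under test.
    flent rrul -l 60 -H netperf-server.example.org \
        -t "$QDISC" -o "rrul-$QDISC.png"
done
```

<p dir="ltr">Everything except the qdisc stays fixed, so any difference between the result plots can be attributed to the qdisc itself.</p>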
<p dir="ltr">Using a simplified topology and workload may well give you clearer, more understandable results from which you can more easily draw a conclusion for your paper. But that conclusion becomes correspondingly less relevant to practical applications and to bleeding-edge research, as others have explained.</p>
<p dir="ltr">In the real world, I have a 3G connection shaped to half a megabit at the provider, occasionally limited to EDGE speeds at the tower depending on propagation conditions, with a typical idle latency of 100 ms and a potential loaded latency of the best part of a MINUTE. I'm not even sure what the upload bandwidth or topology is supposed to be. Packet loss is usually acceptably low, yet the horrible loaded latency means I can't do more than one thing at a time on my phone: it even logs me out of Steam chat if I load a big web page, and after I cancel something to do something else, or to do it a different way, it takes time for the queued traffic to drain.</p>
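<p dir="ltr">A link like that can be roughly emulated on a lab box with netem, which brings such real-world conditions within reach of a repeatable experiment. This is only an approximation under stated assumptions: the interface name is hypothetical, and the 3000-packet FIFO is chosen because at 512 kbit/s it holds roughly a minute of full-size packets, matching the loaded latency described above.</p>

```shell
#!/bin/sh
# Approximate the described 3G link: ~512 kbit/s, ~100 ms base delay,
# and a grossly oversized FIFO (requires root; IFACE is hypothetical).
IFACE=eth0

# netem's rate, delay, and limit (in packets) combine into one qdisc.
# 3000 packets * 1500 B * 8 / 512 kbit/s ~= 70 s of queue at saturation.
tc qdisc replace dev "$IFACE" root netem \
    delay 100ms rate 512kbit limit 3000
```

<p dir="ltr">Dropping the emulated rate further would stand in for the occasional fallback to EDGE speeds.</p>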
<p dir="ltr">Just now I wanted to watch an NTSB media briefing. Twitter tried to load a preview of the video just before I clicked the link to launch the proper YouTube app (which can do fullscreen); with the buffer thus filled, YouTube decided the network was broken and refused to load the video. Worse, it left its navigation state pointing at the video I'd watched yesterday, so when I hit retry, it loaded that one rather than the one I'd just clicked the link for. That's a real-world workload, and a real-world failure of epic proportions, and it didn't even need bidirectional traffic to trigger.</p>
<p dir="ltr"> - Jonathan Morton<br>
</p>