Hello,
Real Example #1:
"We have a 5Mbps CIFS partition and XXX was saturating it so no one could print, use office applications (e.g. Word would hang trying to get printer parameters across the network), network shares, anything that uses CIFS basically. We asked the network profiling guys to investigate and they came back with XXX as the culprit."
Here they had an application that saturated the whole 5 Mbps of their Class of Service. The XXX application was labeled the root cause, rather than how the queues were being managed.
Real Example #2:
Introduction of XXX application to "Reduce bandwidth utilization, typically by 60-95 percent, across all TCP-based applications. Free up bandwidth for other applications, like VoIP or Citrix, so they can perform better."
Here the VoIP and Citrix problems are perceived as bandwidth problems, so instead of working to reduce latency, they deploy TCP-mangling gizmos that compress/coalesce/whatever the traffic to cut bandwidth use. The good results they are seeing probably come not from the reduced bandwidth itself but simply from having fewer packets on the wire, and so less sitting in the droptail queues.
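A minimal back-of-the-envelope sketch of that point (my own illustration, not from either example; the 1500-byte packet size and 100-packet queue limit are assumptions): on a droptail FIFO, the delay a VoIP packet sees is set by how many packets sit ahead of it, so thinning the traffic helps latency even though the link speed is unchanged.

```python
# Droptail FIFO at a fixed link rate: queueing delay tracks queue
# occupancy, so fewer packets on the wire means lower latency,
# independent of any bandwidth "savings".

LINK_RATE_BPS = 5_000_000      # the 5 Mbps link from example #1
PACKET_BITS = 1500 * 8         # assumed full-size Ethernet frame
QUEUE_LIMIT = 100              # assumed droptail cap; excess is dropped

def queueing_delay(packets_in_queue: int) -> float:
    """Seconds a new arrival waits behind the packets already queued."""
    backlog = min(packets_in_queue, QUEUE_LIMIT)  # droptail: no deeper than cap
    return backlog * PACKET_BITS / LINK_RATE_BPS

full = queueing_delay(100)  # saturated queue
half = queueing_delay(50)   # half the packets on the wire
print(f"full queue: {full*1000:.0f} ms, half-full: {half*1000:.0f} ms")
# -> full queue: 240 ms, half-full: 120 ms
```

240 ms of standing queue is fatal for VoIP on its own, so cutting the packet count in half "fixes" the calls without the compression having anything to do with it.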
These two examples are real-world IT pros talking. Enterprise might be a huge pain to debloat!
Regards,
Maciej