On Sun, Feb 5, 2012 at 2:20 PM, Dave Taht wrote:
>
> On Sun, Feb 5, 2012 at 7:39 AM, Eric Dumazet wrote:
>> On Sunday, 5 February 2012 at 02:43 +0200, Jonathan Morton wrote:
>>> On 5 Feb, 2012, at 2:24 am, George B. wrote:
>>>
>>>> I have yet another question to ask: on a system where the vast
>>>> majority of traffic is receive traffic, what can it really do to
>>>> mitigate congestion? I send a click, I get a stream. There doesn't
>>>> seem to be a lot I can do from my side to manage congestion in the
>>>> remote server's transmit side of the link if I am an overall
>>>> receiver of traffic.
>>>>
>>>> If I am sending a bunch of traffic, sure, I can do a lot with queue
>>>> management and early detection. But if I am receiving, it pretty
>>>> much just is what it is, and I have to play the stream that I am
>>>> served.
>>>
>>> There are two good things you can do.
>>>
>>> 1) Pressure your ISP to implement managed queueing and ECN at the
>>> head-end device, e.g. DSLAM or cell tower, and preferably at other
>>> vulnerable points in their network too.
>>
>> Yep, but unfortunately many servers (and clients) don't even
>> initiate/accept ECN.
>>
>>> 2) Implement TCP *receive* window management. This prevents the TCP
>>> algorithm on the sending side from attempting to find the size of
>>> the queues in the network. Search the list archives for "Blackpool"
>>> to see my take on this technique in the form of a kernel patch.
>>> More sophisticated algorithms are doubtless possible.
>>
>> You can tweak the max receiver window to be really small:
>>
>> # cat /proc/sys/net/ipv4/tcp_rmem
>> 4096 87380 4127616
>> # echo "4096 16384 40000" >/proc/sys/net/ipv4/tcp_rmem
>>
>> A third option: install an AQM on the ingress side.
>>
>> Basically you can delay some flows, so that TCP acks are also delayed.
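Eric's tcp_rmem tweak above caps the advertised receive window outright; one less arbitrary way to pick the cap is to size it from the link's bandwidth-delay product. A minimal sketch (the 7 Mbit/s rate and 100 ms RTT here are illustrative assumptions, not numbers from the thread):

```shell
#!/bin/sh
# Sketch: derive a tcp_rmem max from the bandwidth-delay product.
# RATE_BITS and RTT_MS are assumptions for illustration.
RATE_BITS=7000000   # downlink rate, bits per second (assumed)
RTT_MS=100          # worst-case round-trip time, ms (assumed)

# BDP in bytes = (rate in bytes/s) * (RTT in seconds)
BDP=$(( RATE_BITS / 8 * RTT_MS / 1000 ))
echo "BDP = $BDP bytes"

# Cap the max receive window near the BDP instead of the multi-MB
# default (uncomment to apply; needs root):
# echo "4096 16384 $BDP" > /proc/sys/net/ipv4/tcp_rmem
```

With these numbers the cap lands near 87 KB, enough to fill the assumed link at the assumed RTT without letting the sender probe deep remote queues.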
>>
>> Example of a basic tc script (probably too basic, but effective):
>>
>> ETH=eth0
>> IFB=ifb0
>> LOCALNETS="hard.coded.ip.addresses/netmasks"
>> # Put the limit a bit under the real one, to 'own' the queue
>> RATE="rate 7Mbit bandwidth 7Mbit maxburst 80 minburst 40"
>> ALLOT="allot 8000" # Depending on how old your kernel is...
>
> A subtlety here is that several technologies in use today
> (wireless-n, cable, green ethernet, GRO) are highly 'bursty',
> and I'd regard minburst and maxburst as something that needs to be
> calculated as a respectable fraction of the underlying rate.
>
>> modprobe ifb
>> ip link set dev $IFB up
>>
>> tc qdisc add dev $ETH ingress 2>/dev/null
>>
>> # Redirect all ingress traffic on $ETH to the IFB device
>> tc filter add dev $ETH parent ffff: \
>>   protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress \
>>   redirect dev $IFB
>>
>> tc qdisc del dev $IFB root 2>/dev/null
>>
>> # Let's say our NIC is 100Mbit
>> tc qdisc add dev $IFB root handle 1: cbq avpkt 1000 \
>>   rate 100Mbit bandwidth 100Mbit
>>
>> tc class add dev $IFB parent 1: classid 1:1 cbq allot 10000 \
>>   mpu 64 rate 100Mbit prio 1 \
>>   bandwidth 100Mbit maxburst 150 avpkt 1500 bounded
>>
>> # Class for traffic coming from the Internet: limited to X Mbit
>> tc class add dev $IFB parent 1:1 classid 1:11 \
>>   cbq $ALLOT mpu 64 \
>>   $RATE prio 2 \
>>   avpkt 1400 bounded
>>
>> tc qdisc add dev $IFB parent 1:11 handle 11: sfq
>>
>> # Traffic from machines on our LAN: no limit
>> for privnet in $LOCALNETS
>> do
>>   tc filter add dev $IFB parent 1: protocol ip prio 2 u32 \
>>     match ip src $privnet flowid 1:1
>> done
>>
>> # Catch-all: everything else goes to the rate-limited class
>> tc filter add dev $IFB parent 1: protocol ip prio 2 u32 \
>>   match ip protocol 0 0x00 flowid 1:11
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat

--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
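P.S. On the minburst/maxburst point above: since CBQ counts bursts in avpkt-sized packets, they can be derived from the shaped rate and a tolerated burst duration. A back-of-the-envelope sketch (the 100 ms burst window is an assumption; the rate and avpkt match the script above):

```shell
#!/bin/sh
# Sketch: size maxburst as the number of avpkt-sized packets the
# shaped link emits in one burst window. BURST_MS is an assumption.
RATE_BITS=7000000   # shaped rate from the script above, bits/s
BURST_MS=100        # tolerated burst duration, ms (assumed)
AVPKT=1400          # average packet size used in the script, bytes

# Bytes the link can emit in one burst window, then convert to packets
BURST_BYTES=$(( RATE_BITS / 8 * BURST_MS / 1000 ))
MAXBURST=$(( BURST_BYTES / AVPKT ))
MINBURST=$(( MAXBURST / 2 ))
echo "maxburst=$MAXBURST minburst=$MINBURST"
```

With these assumptions the result comes out near the maxburst 80 / minburst 40 used in the script, which is the sense in which the bursts end up a "respectable fraction of the underlying rate".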