From: Benjamin Cronce
To: bloat
Date: Sun, 21 Jun 2015 11:19:30 -0500
Subject: [Bloat] TCP congestion detection - random thoughts

Just a random Sunday morning thought that has probably been had before, but I can't recall hearing it.

My understanding of most TCP congestion control algorithms is that they primarily watch for drops, and drops are signaled back to the sender by the receiving party via ACKs. The issue is that TCP keeps pushing more data into the window until a drop is signaled, even if the received rate is not increasing. What if the sending TCP also monitored the received rate and backed off from cramming more segments into the window when that rate stops increasing?

Two things could measure this: RTT, which is already part of TCP's statistics, and the rate at which bytes are ACKed. If you double the number of segments being sent but, over a time frame relative to the RTT, you do not see a meaningful increase in the rate at which bytes are being ACKed, you may want to back off.
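
A minimal sketch of that test, in Python; the function name, the sampling scheme, and the 10% threshold are all made up for illustration, not taken from any real stack:

# Hypothetical back-off test: compare the ACK rate measured over one
# interval before a cwnd increase with the rate measured over an equal
# interval after it. All names and the threshold are illustrative.

def should_back_off(bytes_acked_before, bytes_acked_after, interval_rtts, rtt):
    window = interval_rtts * rtt          # seconds per measurement window
    rate_before = bytes_acked_before / window
    rate_after = bytes_acked_after / window
    MEANINGFUL_GAIN = 1.10                # assumed: require a 10% improvement
    # More segments in flight but no faster ACK clock means the extra
    # segments are just sitting in a queue somewhere.
    return rate_after < rate_before * MEANINGFUL_GAIN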

It just seems to me that if you have a 50ms RTT and 10 seconds of bufferbloat, TCP is cramming data down the path without a care in the world for how quickly data is actually getting ACKed; it is just waiting for the first segment to get dropped, which would never happen in an infinitely buffered network.
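
Working through those numbers (an illustration only, assuming the sender keeps a smoothed RTT and a minimum-RTT sample):

def queueing_delay(srtt, min_rtt):
    # Standing queue delay is the RTT inflation over the path minimum.
    return max(0.0, srtt - min_rtt)

min_rtt = 0.050            # 50 ms uncongested path RTT
srtt = 10.0 + min_rtt      # 10 s of standing queue on top of it
print(queueing_delay(srtt, min_rtt))   # ~10.0 s of pure buffering
print(srtt / min_rtt)                  # ~201x RTT inflation, with zero drops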

TCP should be able to keep state that tracks the minimum RTT and the maximum ACK rate. Between the two, it should not be able to go over the max path rate except when attempting to probe for a new max or min. Min RTT is probably a good target because path latency should be relatively static; path free bandwidth, however, is not static. The desirable number of segments in flight would need to change, but it would be bounded by that max, as sketched below.
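
That product of the two tracked values is just an estimate of the path's bandwidth-delay product, so a cap on in-flight data could look something like this (hypothetical names, and the probe headroom parameter is an assumption):

class PathState:
    # Track min RTT and max ACK rate; cap bytes in flight at their product.

    def __init__(self):
        self.min_rtt = float("inf")   # seconds
        self.max_ack_rate = 0.0       # bytes per second

    def on_ack(self, rtt_sample, ack_rate_sample):
        self.min_rtt = min(self.min_rtt, rtt_sample)
        self.max_ack_rate = max(self.max_ack_rate, ack_rate_sample)

    def inflight_cap(self, probe_gain=1.0):
        # probe_gain > 1.0 only while deliberately probing for a new max.
        if self.min_rtt == float("inf"):
            return None               # no samples yet
        return self.max_ack_rate * self.min_rtt * probe_gain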

Of course, Nagle-type algorithms (and delayed ACKs, which coalesce ACKs on the receiver side) can mess with this, because when an ACK occurs no longer depends entirely on when a segment is received but also on some additional amount of time. If you assume that coalescing will combine N segments into a single ACK, then you need to add to the RTT the time, at the current PPS, until you expect another ACK, assuming N segments will be coalesced. This would matter even more for low-latency, low-bandwidth paths. Coalescing information could be assumed, negotiated, or inferred; negotiated would be best.
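
A sketch of that correction, assuming a known or negotiated coalescing factor N (all names hypothetical):

def expected_ack_delay(rtt, n_coalesced, packets_per_second):
    # Expected wait for the next ACK: path RTT plus the time it takes
    # for N segments to accumulate at the current sending rate.
    coalesce_wait = n_coalesced / packets_per_second
    return rtt + coalesce_wait

# Example: 20 ms RTT, receiver ACKs every 2nd segment, sender at 100 pps.
# The coalescing term adds another 20 ms; on a slow path it dominates.
print(expected_ack_delay(0.020, 2, 100))   # 0.04 s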

Anyway, just some random Sunday thoughts= .