<div dir="ltr">Hi Jonathan,<div><div><div><br></div><div>Yes, we are using TCP NewReno at the moment.</div><div><br></div></div><div>There was a typo in labeling the Y-axis; instead of "Throughput" it should be "Link Utilization" in the following graphs (now corrected):</div><div><br></div><div><a href="https://github.com/Daipu/COBALT/wiki/Light-Traffic" target="_blank">https://github.com/Daipu/COBALT/wiki/Light-Traffic</a><br></div><div><br></div><div>throughput graphs for the same scenario are here:</div><div><br></div><div><a href="https://github.com/Daipu/COBALT/wiki/Instantaneous-throughput:-Light-Traffic" target="_blank">https://github.com/Daipu/COBALT/wiki/Instantaneous-throughput:-Light-Traffic</a><br></div><div><br></div><div>and cwnd graphs here:</div><div><br></div><div><a href="https://github.com/Daipu/COBALT/wiki/cwnd-plots:-Light-traffic" target="_blank">https://github.com/Daipu/COBALT/wiki/cwnd-plots:-Light-traffic</a><br></div><div><br></div><div>So, now what we see is that although queue occupancy is under control and link remains fully utilized, the senders cwnd gets synchronized in one scenario (only when packet size is 1000 bytes and with COBALT). For all other cases, there is no synchronization of cwnd (including COBALT with packet size 1500 bytes).</div><div><br></div><div>By hidden queues, do you mean the NIC buffers? ns-3 has a Linux-like traffic control wherein the packets dequeued by a queue discipline are enqueued into NIC buffer.</div><div><br></div><div>The tasks that we're currently working on are listed here:</div><div><br></div><div><a href="https://github.com/Daipu/COBALT/issues/1" target="_blank">https://github.com/Daipu/COBALT/issues/1</a><br></div><div><br></div><div><div>Thanks a lot for your help. We really appreciate it.</div><br class="gmail-m_6375158607194474958gmail-Apple-interchange-newline"></div><div>Regards,</div><div>Jendaipou Palmei</div><div>Shefali Gupta</div></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Dec 6, 2018 at 11:06 PM Jonathan Morton <<a href="mailto:chromatix99@gmail.com">chromatix99@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">>> We're currently working on the following:<br>
The tasks that we're currently working on are listed here:

https://github.com/Daipu/COBALT/issues/1

Thanks a lot for your help. We really appreciate it.

Regards,
Jendaipou Palmei
Shefali Gupta

On Thu, Dec 6, 2018 at 11:06 PM Jonathan Morton <chromatix99@gmail.com> wrote:
> >> We're currently working on the following:
> >>
> >> 1. plots for the actual number of marks/drops per time interval for COBALT, CoDel, and PIE.
> >> 2. zoomed-in plots on small time intervals to show the dynamic behavior of the algorithm.
> >> 3. a file showing the timestamp of each drop.
> >
> > I await these with interest.
>
> I noticed that some progress has been made here already; in particular, I can now see cwnd graphs, which make a very interesting datapoint when directly compared with the throughput and queue-occupancy graphs. It's now definitely clear that the senders are using TCP Reno or some close variant thereof.
>
> In fact, the three graphs are mutually inconsistent. Aside from the sharp cwnd reduction events, the cwnd of all the flows increases roughly linearly over time, but the throughput remains flat while the queue is almost always empty (for CoDel and COBALT). This can only be explained if there's a hidden queue at the bottleneck, perhaps associated with the simulated NIC immediately downstream of the AQM.
>
> I would suggest checking the simulation setup carefully for hidden queues of this sort.
>
> - Jonathan Morton