On Wed, 11 May 2016, David Lang wrote:

> On Wed, 11 May 2016, Toke Høiland-Jørgensen wrote:
>
>> Luca Muscariello writes:
>>
>>> Do you happen to recall what precision you achieved or how much the
>>> precision was really important? Several papers seem to assume that very
>>> high precision is not terribly important since it all evens out in the
>>> end, and I can see how that could be true; but would like to have it
>>> confirmed :)
>>>
>>> what do you mean with precision?
>>> Do you mean in measuring the PHY rate? Short term vs long term
>>> measurements? else?
>>
>> Yes, in measuring the rate. Was this a per-packet thing, and were you
>> actually able to get information sufficiently accurate to achieve the
>> desired level of fairness? And by what mechanism? Was this in the driver
>> or higher up in the stack?
>
> I expect that if you were able to change this even once/sec and account for
> the rate, you would be far better off than what we have now.

By the way, I have logs from the last two SCALE conferences of the /sys data
per station showing the rate info. If you give me a way to send you the
multi-G file, you can look through it to see how rapidly the rate changes for
a given station under real-world, high-user-density conditions.

I suspect that the biggest problem right now is that the higher-level
scheduling isn't accounting for "station X is at rate 1, station Y is at
rate 100" and is trying to be 'fair' by sending the same amount of data to
each of them.

David Lang
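The last point above can be sketched numerically: if the scheduler hands each station the same number of bytes per round, the slow station consumes almost all the airtime, whereas scaling each station's byte quota by its current rate equalizes airtime instead. A minimal illustrative sketch (station names and rates are hypothetical, not taken from the logs mentioned above; this is not the actual mac80211 scheduler):

```python
# Byte-fair vs airtime-fair scheduling, illustrative only.
# Rates are in bits/s; airtime is accumulated in seconds.

def byte_fair(stations, rounds):
    """Send the same number of bytes to every station each round."""
    airtime = {s: 0.0 for s in stations}
    for _ in range(rounds):
        for s, rate in stations.items():
            bytes_sent = 1500                     # one MTU-sized frame each
            airtime[s] += bytes_sent * 8 / rate   # slow stations eat the airtime
    return airtime

def airtime_fair(stations, rounds):
    """Scale each station's per-round byte quota by its current rate."""
    airtime = {s: 0.0 for s in stations}
    base_rate = min(stations.values())
    for _ in range(rounds):
        for s, rate in stations.items():
            bytes_sent = 1500 * rate / base_rate  # quota proportional to rate
            airtime[s] += bytes_sent * 8 / rate   # equal airtime per station
    return airtime

# "Station X is at rate 1, station Y is at rate 100" (hypothetical values):
stations = {"X": 1e6, "Y": 100e6}
print(byte_fair(stations, 10))     # X's airtime dwarfs Y's
print(airtime_fair(stations, 10))  # equal airtime for both
```

The airtime-fair variant delivers 100x more bytes to the fast station over the same wall-clock time, which is the behavior the rate information from /sys would enable even if the rate were only refreshed once per second.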