* Re: [Bloat] Thoughts on Stochastic Fair Blue
@ 2011-03-24 13:31 Dave Taht
From: Dave Taht @ 2011-03-24 13:31 UTC (permalink / raw)
To: bloat
Just as we've done with debloat-testing, I'd like to have a git tree with
all the common shaper scripts in it.
We can do it on github, infradead, someone's private domain, etc.
Suggestions for a name (debloat-shapers)?
Location?
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-04-03 16:35 ` Juliusz Chroboczek
@ 2011-04-03 18:03 ` Jonathan Morton
From: Jonathan Morton @ 2011-04-03 18:03 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
On 3 Apr, 2011, at 7:35 pm, Juliusz Chroboczek wrote:
> Sorry for the delay, but I wanted to think this over.
>
>> My second observation is that marking and dropping both happen at the
>> tail of the queue, not the head. This delays the congestion information
>> reaching the receiver, and from there to the sender, by the length of
>> the queue
>
> It's very difficult to drop at the head of the queue in SFB, since we'd
> need to find a suitable packet that's in the same flow. Since SFB makes
> heroic efforts to keep the queue size small, this shouldn't make much of
> a difference.
>
> Your suggestion is most certainly valid for plain BLUE.
If the queue is overfull, then the efforts at proactive congestion control have failed and tail-dropping is fine.
I was thinking more of the probabilistic marking/dropping that occurs under normal conditions, which should occur at the head of the queue to minimise feedback latency. The head of the queue needs to decrement the bucket counters anyway, so it shouldn't be extra overhead to move the probability check here.
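To make that concrete, here is a minimal sketch of what a dequeue-time check could look like (this is not the actual sch_sfb code; sfb_mark_probability() and sfb_decrement_buckets() are helper names I'm assuming here, and the 16-bit fixed-point scale is only for illustration):

#include <linux/skbuff.h>
#include <linux/net.h>
#include <net/pkt_sched.h>
#include <net/inet_ecn.h>

/* Sketch only: probabilistic mark/drop performed at the head of the queue,
 * where the per-flow bucket counters are being decremented anyway, so the
 * congestion signal rides on the packet that is about to leave rather than
 * on one that will sit behind the whole queue. */
static struct sk_buff *sfb_head_mark_dequeue(struct Qdisc *sch)
{
        struct sk_buff *skb;

        while ((skb = qdisc_dequeue_head(sch)) != NULL) {
                u32 p = sfb_mark_probability(sch, skb); /* per-flow p_mark, scaled to 0..0xFFFF (assumed helper) */

                sfb_decrement_buckets(sch, skb);        /* head-of-queue bookkeeping (assumed helper) */

                if ((net_random() & 0xFFFF) >= p)
                        return skb;     /* no congestion signal this time */
                if (INET_ECN_set_ce(skb))
                        return skb;     /* ECN-capable flow: mark and send */
                kfree_skb(skb);         /* non-ECN flow: drop at the head, try the next packet */
                sch->qstats.drops++;
        }
        return NULL;
}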
>> Another observation is that packets are not re-ordered by SFB, which
>> (given the Bloom filter) is potentially a missed opportunity.
>
> What's your suggestion?
>
>> However, they can be re-ordered in the current implementation by
>> using child qdiscs such as SFQ
>
> I don't see how that buys you anything. SFB is very aggressive with
> dropping packets when under congestion, and the packet drop happens
> *before* the child qdisc gets a chance to see the packet; hence, putting
> SFQ below SFB won't buy you much, it'll just reorder packets in the
> small queue that SFB allows. Or am I missing something?
To an analogue modem or a GPRS connection, even a single default SFB bucket can take a significant time to drain. This gets worse if there are several independent flows. (And this is not necessarily abuse by the end-user, but a legitimate result of going out of range of the 3G signal while using it as such.)
Consider the extreme case, where you have a dozen flows each filling their bucket with a dozen packets, and then a DNS reply packet arrives. Without packet reordering, the DNS packet must wait for 144 packets to get out of its way, which could take tens of seconds, so the DNS resolver will time out. With reordering, it only has to wait for 12 packets (possibly fewer still with DRR, which I haven't investigated in detail), which is fast enough that the connection, though very sluggish, is still functional.
Under a less extreme scenario, suppose I'm downloading an iApp update to my phone, and meanwhile it decides to check for new email - IMAP being a relatively chatty request-response protocol, without large amounts of data being transferred. Then the train goes under a bridge and suddenly I'm on GPRS, so the tower's queue fills up with the iApp download. With packet reordering, the IMAP protocol keeps going fast enough to download a new email in a few seconds, and then happily gets out of the way. Without it, the iApp download monopolises the connection and the IMAP job will take minutes to complete. Email is a canonical low-bandwidth application; users reasonably expect it to Just Work.
Bear in mind also that my 3G data subscription, though unlimited in per-month traffic, is limited to 500Kbps instantaneous, and is therefore only one order of magnitude faster than an analogue modem. Packet reordering wouldn't have as dramatic an effect on functionality as at the slower speed, but it would still make the connection feel better to use. Given that IP packet aggregation into over-the-air frames is normal practice, this would effectively put one packet from each of several flows into each frame, which is also a nice form of interleaving that helps to reduce the impact of transitory unreachability.
If you kept a list of packets in each bucket, rather than just the count of them, then you could simply iterate over the buckets (in all rows) when dequeueing, doing a *better* job than SFQ because you have a different hash salt in each row. You would then need to delete the entry from the relevant bucket in all of the rows - this *is* extra overhead, but it's probably no greater than passing it down to SFQ.
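As a rough sketch of that layout (the structures and names here are mine for illustration, not the existing sch_sfb ones, and the per-qdisc state is shown as globals for brevity): every queued packet carries one list link per row, dequeue round-robins over the buckets, and the packet is unlinked from every row when it leaves, which is exactly the extra deletion cost mentioned above.

#include <linux/list.h>
#include <linux/skbuff.h>
#include <linux/slab.h>

#define SFB_ROWS        8
#define SFB_COLS        16

struct sfb_pkt;

struct sfb_link {
        struct list_head node;          /* membership in one bucket's list */
        struct sfb_pkt *pkt;            /* back-pointer to the queued packet */
};

struct sfb_pkt {
        struct sk_buff *skb;
        struct sfb_link link[SFB_ROWS]; /* one link per row, set up at enqueue */
};

struct sfb_bucket {
        u32 qlen;                       /* SFB's existing counter */
        struct list_head pkts;          /* list of struct sfb_link, initialised at setup */
};

static struct sfb_bucket buckets[SFB_ROWS][SFB_COLS];
static unsigned int rr_row, rr_col;     /* round-robin dequeue cursor */

static struct sk_buff *sfb_rr_dequeue(void)
{
        unsigned int scanned;

        for (scanned = 0; scanned < SFB_ROWS * SFB_COLS; scanned++) {
                struct sfb_bucket *b = &buckets[rr_row][rr_col];

                /* advance the cursor for the next call */
                if (++rr_col == SFB_COLS) {
                        rr_col = 0;
                        rr_row = (rr_row + 1) % SFB_ROWS;
                }

                if (!list_empty(&b->pkts)) {
                        struct sfb_link *l =
                                list_first_entry(&b->pkts, struct sfb_link, node);
                        struct sfb_pkt *p = l->pkt;
                        struct sk_buff *skb = p->skb;
                        int i;

                        /* the extra overhead mentioned above: unlink this
                         * packet from its bucket in every row (the matching
                         * per-row qlen counters would be decremented here too) */
                        for (i = 0; i < SFB_ROWS; i++)
                                list_del(&p->link[i].node);
                        kfree(p);
                        return skb;
                }
        }
        return NULL;                    /* all buckets empty */
}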
- Jonathan
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 20:44 ` richard
@ 2011-04-03 16:35 ` Juliusz Chroboczek
2011-04-03 18:03 ` Jonathan Morton
From: Juliusz Chroboczek @ 2011-04-03 16:35 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Sorry for the delay, but I wanted to think this over.
> My second observation is that marking and dropping both happen at the
> tail of the queue, not the head. This delays the congestion information
> reaching the receiver, and from there to the sender, by the length of
> the queue
It's very difficult to drop at the head of the queue in SFB, since we'd
need to find a suitable packet that's in the same flow. Since SFB makes
heroic efforts to keep the queue size small, this shouldn't make much of
a difference.
Your suggestion is most certainly valid for plain BLUE.
> Another observation is that packets are not re-ordered by SFB, which
> (given the Bloom filter) is potentially a missed opportunity.
What's your suggestion?
> However, they can be re-ordered in the current implementation by
> using child qdiscs such as SFQ
I don't see how that buys you anything. SFB is very aggressive with
dropping packets when under congestion, and the packet drop happens
*before* the child qdisc gets a chance to see the packet; hence, putting
SFQ below SFB won't buy you much, it'll just reorder packets in the
small queue that SFB allows. Or am I missing something?
-- Juliusz
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 5:30 ` Dave Hart
@ 2011-03-24 20:44 ` richard
2011-04-03 16:35 ` Juliusz Chroboczek
From: richard @ 2011-03-24 20:44 UTC (permalink / raw)
To: Dave Hart; +Cc: bloat
On Thu, 2011-03-24 at 05:30 +0000, Dave Hart wrote:
> On Thu, Mar 24, 2011 at 01:03 UTC, Juliusz Chroboczek wrote:
> >
> > [1] Aleksandar Kuzmanovic. The power of explicit congestion
> > notification. In Proceedings of the 2005 conference on Applications,
> > technologies, architectures, and protocols for computer
> > communications. 2005.
>
> http://www.cs.northwestern.edu/~akuzma/doc/ecn.pdf
>
> appears to be the paper Juliusz cited.
>
If nothing else, I take away from this paper that ECN should be applied
(at least) on servers (and they advocate clients and routers as well) to
TCP control packets (e.g. SYN and ACK packets) as well as data packets,
despite the perception (accepted admin legend?) that this might be a
"bad thing" because it could aid a potential SYN-flood attack vector.
The point that doing this provides "instant" goodness for ECN machines,
with only gradual degradation for non-ECN machines, is an incentive to
switch on ECN (and to start adding it to SYN/ACK too) no matter what AQM
is being used, even the brain-dead one they came up with to test against
RED.
richard
> Cheers,
> Dave Hart
--
Richard C. Pitt Pacific Data Capture
rcpitt@pacdat.net 604-644-9265
http://digital-rag.com www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 12:40 ` Jonathan Morton
@ 2011-03-24 13:32 ` Eric Dumazet
From: Eric Dumazet @ 2011-03-24 13:32 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
On Thursday, 24 March 2011 at 14:40 +0200, Jonathan Morton wrote:
> Finally, it might also be interesting and useful to add bare-bones ECN
> support to the existing "dumb" qdiscs, such as SFQ and the FIFO
> family. Simply start marking (and dropping non-supporting flows) when
> the queue is more than half full.
Three months ago, I played with an SFQ patch to add ECN support, based on
the delay of the packet in the queue.
http://www.spinics.net/lists/netdev/msg151594.html
This patch is a hack, of course (units are jiffies, not ms).
--------------------------------------
Here is the POC patch I am currently testing, with a probability of
"early dropping" a packet of one percent per ms (HZ=1000 here), applied
only if the packet has stayed at least 4 ms in the queue.
Of course, this only applies where SFQ is used, with the known SFQ limits :)
The term "early drop" is a lie: RED really does its early mark/drop of a
packet at enqueue() time, while I do it at dequeue() time [since I need to
compute the delay]. But the effect is the same on sent packets. This might
use a bit more memory, but no more than current SFQ [and only if flows
don't react to marks/drops].
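As a rough worked example of how the probability in the patch below scales (with HZ=1000, so one jiffy is one millisecond): Px = delay * (0xFFFFFF / 100) is compared against a random 24-bit number at dequeue, so nothing happens below the 4 ms red_delay threshold, a packet that has waited 10 ms is marked or dropped with roughly 10% probability, and by about 100 ms of queueing the probability saturates near 1.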
insmod net/sched/sch_sfq.ko red_delay=4
By the way, I do think we should lower SFQ_DEPTH a bit and increase
SFQ_SLOTS by the same amount. Allowing 127 packets per flow doesn't seem
necessary in most situations where SFQ might be used.
net/sched/sch_sfq.c | 37 +++++++++++++++++++++++++++++++++----
1 files changed, 33 insertions(+), 4 deletions(-)
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index d54ac94..4f958e3 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -24,6 +24,8 @@
#include <net/ip.h>
#include <net/netlink.h>
#include <net/pkt_sched.h>
+#include <net/inet_ecn.h>
+#include <linux/moduleparam.h>
/* Stochastic Fairness Queuing algorithm.
@@ -86,6 +88,10 @@
/* This type should contain at least SFQ_DEPTH + SFQ_SLOTS values */
typedef unsigned char sfq_index;
+static int red_delay; /* default : no RED handling */
+module_param(red_delay, int, 0);
+MODULE_PARM_DESC(red_delay, "mark/drop packets if they stay in queue longer than red_delay ticks");
+
/*
* We dont use pointers to save space.
* Small indexes [0 ... SFQ_SLOTS - 1] are 'pointers' to slots[] array
@@ -391,6 +397,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
sch->qstats.backlog += qdisc_pkt_len(skb);
slot_queue_add(slot, skb);
+ qdisc_skb_cb(skb)->timestamp = jiffies;
sfq_inc(q, x);
if (slot->qlen == 1) { /* The flow is new */
if (q->tail == NULL) { /* It is the first flow */
@@ -402,11 +409,8 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
q->tail = slot;
slot->allot = q->scaled_quantum;
}
- if (++sch->q.qlen <= q->limit) {
- sch->bstats.bytes += qdisc_pkt_len(skb);
- sch->bstats.packets++;
+ if (++sch->q.qlen <= q->limit)
return NET_XMIT_SUCCESS;
- }
sfq_drop(sch);
return NET_XMIT_CN;
@@ -432,6 +436,7 @@ sfq_dequeue(struct Qdisc *sch)
sfq_index a, next_a;
struct sfq_slot *slot;
+restart:
/* No active slots */
if (q->tail == NULL)
return NULL;
@@ -455,12 +460,36 @@ next_slot:
next_a = slot->next;
if (a == next_a) {
q->tail = NULL; /* no more active slots */
+ /* last packet queued, dont even try to apply RED */
return skb;
}
q->tail->next = next_a;
} else {
slot->allot -= SFQ_ALLOT_SIZE(qdisc_pkt_len(skb));
}
+ if (red_delay) {
+ long delay = jiffies - qdisc_skb_cb(skb)->timestamp;
+
+ if (delay >= red_delay) {
+ long Px = delay * (0xFFFFFF / 100); /* 1 percent per jiffy */
+ if ((net_random() & 0xFFFFFF) < Px) {
+ if (INET_ECN_set_ce(skb)) {
+ /* no ecnmark counter yet :) */
+ sch->qstats.overlimits++;
+ } else {
+ /* penalize this flow : we drop the
+ * packet while we changed slot->allot
+ */
+ kfree_skb(skb);
+ /* no early_drop counter yet :) */
+ sch->qstats.drops++;
+ goto restart;
+ }
+ }
+ }
+ }
+ sch->bstats.bytes += qdisc_pkt_len(skb);
+ sch->bstats.packets++;
return skb;
}
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 1:03 ` Juliusz Chroboczek
2011-03-24 5:30 ` Dave Hart
2011-03-24 11:59 ` Jim Gettys
@ 2011-03-24 12:40 ` Jonathan Morton
2011-03-24 13:32 ` Eric Dumazet
From: Jonathan Morton @ 2011-03-24 12:40 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
On 24 Mar, 2011, at 3:03 am, Juliusz Chroboczek wrote:
> (I'm the original author of sch_sfb.)
>
>> Having read some more documents and code, I have some extra insight into
>> SFB that might prove helpful. Note that I haven't actually tried it
>> yet, but it looks good anyway. In control-systems parlance, this is
>> effectively a multichannel I-type controller, where RED is
>> a single-channel P-type controller.
>
> Methinks that it would be worthwhile to implement plain BLUE, in order to
> see how it compares. (Of course, once Jim comes down from Mount Sinai
> and hands us RED-Lite, it might also be worth thinking about SFRed.)
I'd be interested to see if you can make a BLUE implementation which doesn't throw a wobbler with lossy child qdiscs. Because there's only one queue, you should be able to query the child's queue length instead of maintaining it internally.
I'd *also* be interested in an SFB implementation which also has the packet-reordering characteristics of SFQ built-in, so that applying child qdiscs to it would be unnecessary. I'm just about to try putting this combination together manually on a live network.
Finally, it might also be interesting and useful to add bare-bones ECN support to the existing "dumb" qdiscs, such as SFQ and the FIFO family. Simply start marking (and dropping non-supporting flows) when the queue is more than half full.
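A minimal sketch of that rule for a FIFO-style qdisc might look like this (the function name and fixed limit are placeholders, not an existing qdisc):

#include <net/pkt_sched.h>
#include <net/inet_ecn.h>

#define DUMB_ECN_LIMIT  128     /* example queue limit in packets (assumption) */

/* Sketch only: a FIFO enqueue that starts ECN-marking once the queue is
 * more than half full, and drops packets from flows that do not negotiate
 * ECN.  Not an existing kernel qdisc; the name and limit are illustrative. */
static int dumb_ecn_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
        if (sch->q.qlen >= DUMB_ECN_LIMIT)              /* hard limit: tail drop */
                return qdisc_drop(skb, sch);

        if (sch->q.qlen >= DUMB_ECN_LIMIT / 2) {        /* more than half full */
                if (!INET_ECN_set_ce(skb)) {            /* flow is not ECN-capable */
                        sch->qstats.drops++;
                        kfree_skb(skb);
                        return NET_XMIT_CN;
                }
                sch->qstats.overlimits++;               /* reuse this counter for marks */
        }
        return qdisc_enqueue_tail(skb, sch);
}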
>> My first thought after reading just the paper was that unconditionally
>> dropping the packets which increase the marking probability was suspect.
>> It should be quite possible to manage a queue using just ECN, without
>> any packet loss, in simple cases such as a single bulk TCP flow. Thus
>> I am pleased to see that the SFB code in the debloat tree has separate
>> thresholds for increasing the marking rate and tail-dropping. They are
>> fairly close together, but they are at least distinct.
>
> I hesitated for a long time before doing that, and would dearly like to
> see some conclusive experimental data that this is a good idea. The
> trouble is -- when the drop rate is too low, we risk receiving a burst
> of packets from a traditional TCP sender. Having the drop threshold
> larger than the increase threshold will get such bursts into our
> buffer. I'm not going to explain on this particular list why such
> bursts are ungood ;-)
Actually, we *do* need to support momentary bursts of packets, although with ECN we should expect these excursions to be smaller and less frequent than without it. The primary cause of a large packet burst is presumably from packet loss recovery, although some broken TCPs can produce them with no provocation.
At the bare minimum, you need to support ECN-marking the first triggering packet rather than dropping it. The goal here is to have functioning congestion control without packet loss (a concept which should theoretically please the Cisco crowd). With BLUE as described in the paper, a packet would always be dropped before ECN marking started, and that is what I was concerned about. With even a small extra buffer on top, the TCP has some chance to back off before loss occurs.
With packet reordering like SFQ, the effects of bursts of packets on a single flow are (mostly) isolated to that flow. I think it's better to accommodate them than to squash them, especially as dropping packets will lead to more bursts as the sending TCP tries to compensate and recover.
> The other, somewhat unrelated, issue you should be aware of is that
> ECN marking has some issues in highly congested networks [1]; this is
> the reason why sch_sfb will start dropping after the mark probability
> has gone above 1/2.
I haven't had time to read the paper thoroughly, but I don't argue with this - if the marking probability goes above 1/2 then you probably have an unresponsive flow anyway. I can't imagine any sane TCP responding so aggressively to the equivalent of a 50% packet loss.
>> the length of the queue - which does not appear to be self-tuned by
>> the flow rate. However, the default values appear to be sensible.
>
> Please clarify.
The consensus seems to be that queue length should depend on bandwidth - if we assume that link latency is negligible, then the RTT is usually dominated by the general Internet, assumed constant at 100ms. OTOH, there is another school of thought which says that queue length must *also* depend on the number of flows, with a greater number of flows causing a shortening in optimum queue length (because the bandwidth and thus burst size from an individual flow is smaller).
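For concreteness, using the standard sizing rules rather than anything measured in this thread: queue = bandwidth * RTT gives a 10 Mbit/s link at the assumed 100 ms RTT roughly 125 KB of buffer, or about 83 full-size packets, while the bandwidth * RTT / sqrt(n) school divides that by sqrt(16) = 4 for 16 flows, leaving about 31 KB, or roughly 21 packets.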
But tuning the queue length might not actually be necessary, provided the qdisc is sufficiently sophisticated in other ways. We shall see.
>> The major concern with this arrangement is the incompatibility with
>> qdiscs that can drop packets internally, since this is not necessarily
>> obvious to end-user admins.
>
> Agreed. More generally, Linux' qdisc setup is error-prone, and
> certainly beyond the abilities of the people we're targeting; we need to
> get a bunch of reasonable defaults into distributions. (Please start
> with OpenWRT, whose qos-scripts package[2] is used by a fair number of
> people.)
Something better than pfifo_fast is definitely warranted by default, except on the tiniest embedded devices which cannot cope with the memory requirements. But those are always a corner case.
>> I also thought of a different way to implement the hash rotation.
>> Instead of shadowing the entire set of buckets, simply replace the hash
>> on one row at a time. This requires that the next-to-minimum values for
>> q_len and p_mark are used, rather than the strict minima. It is still
>> necessary to calculate two hash values for each packet, but the memory
>> requirements are reduced at the expense of effectively removing one row
>> from the Bloom filter.
>
> Interesting idea.
- Jonathan
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 1:03 ` Juliusz Chroboczek
2011-03-24 5:30 ` Dave Hart
@ 2011-03-24 11:59 ` Jim Gettys
2011-03-24 12:40 ` Jonathan Morton
From: Jim Gettys @ 2011-03-24 11:59 UTC (permalink / raw)
To: bloat
On 03/24/2011 02:03 AM, Juliusz Chroboczek wrote:
>
> Methinks that it would be worthwhile to implement plain BLUE, in order to
> see how it compares. (Of course, once Jim comes down from Mount Sinai
> and hands us RED-Lite, it might also be worth thinking about SFRed.)
>
Heh. I'll try, but being Moses is really tough, given god's previous
track record...
For god's sake, don't wait for god... And keep in mind that god has
been wrong in the past. We need to do a breadth-first search through the
solution space.
- Jim
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-24 1:03 ` Juliusz Chroboczek
@ 2011-03-24 5:30 ` Dave Hart
2011-03-24 20:44 ` richard
2011-03-24 11:59 ` Jim Gettys
2011-03-24 12:40 ` Jonathan Morton
From: Dave Hart @ 2011-03-24 5:30 UTC (permalink / raw)
To: bloat
On Thu, Mar 24, 2011 at 01:03 UTC, Juliusz Chroboczek wrote:
>
> [1] Aleksandar Kuzmanovic. The power of explicit congestion
> notification. In Proceedings of the 2005 conference on Applications,
> technologies, architectures, and protocols for computer
> communications. 2005.
http://www.cs.northwestern.edu/~akuzma/doc/ecn.pdf
appears to be the paper Juliusz cited.
Cheers,
Dave Hart
* Re: [Bloat] Thoughts on Stochastic Fair Blue
2011-03-13 1:40 Jonathan Morton
@ 2011-03-24 1:03 ` Juliusz Chroboczek
2011-03-24 5:30 ` Dave Hart
` (2 more replies)
From: Juliusz Chroboczek @ 2011-03-24 1:03 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
(I'm the original author of sch_sfb.)
> Having read some more documents and code, I have some extra insight into
> SFB that might prove helpful. Note that I haven't actually tried it
> yet, but it looks good anyway. In control-systems parlance, this is
> effectively a multichannel I-type controller, where RED is
> a single-channel P-type controller.
Methinks that it would be worthwhile to implement plain BLUE, in order to
see how it compares. (Of course, once Jim comes down from Mount Sinai
and hands us RED-Lite, it might also be worth thinking about SFRed.)
> My first thought after reading just the paper was that unconditionally
> dropping the packets which increase the marking probability was suspect.
> It should be quite possible to manage a queue using just ECN, without
> any packet loss, in simple cases such as a single bulk TCP flow. Thus
> I am pleased to see that the SFB code in the debloat tree has separate
> thresholds for increasing the marking rate and tail-dropping. They are
> fairly close together, but they are at least distinct.
I hesitated for a long time before doing that, and would dearly like to
see some conclusive experimental data that this is a good idea. The
trouble is -- when the drop rate is too low, we risk receiving a burst
of packets from a traditional TCP sender. Having the drop threshold
larger than the increase threshold will get such bursts into our
buffer. I'm not going to explain on this particular list why such
bursts are ungood ;-)
The other, somewhat unrelated, issue you should be aware of is that
ECN marking has some issues in highly congested networks [1]; this is
the reason why sch_sfb will start dropping after the mark probability
has gone above 1/2.
> the length of the queue - which does not appear to be self-tuned by
> the flow rate. However, the default values appear to be sensible.
Please clarify.
> The major concern with this arrangement is the incompatibility with
> qdiscs that can drop packets internally, since this is not necessarily
> obvious to end-user admins.
Agreed. More generally, Linux' qdisc setup is error-prone, and
certainly beyond the abilities of the people we're targeting; we need to
get a bunch of reasonable defaults into distributions. (Please start
with OpenWRT, whose qos-scripts package[2] is used by a fair number of
people.)
> I also thought of a different way to implement the hash rotation.
> Instead of shadowing the entire set of buckets, simply replace the hash
> on one row at a time. This requires that the next-to-minimum values for
> q_len and p_mark are used, rather than the strict minima. It is still
> necessary to calculate two hash values for each packet, but the memory
> requirements are reduced at the expense of effectively removing one row
> from the Bloom filter.
Interesting idea.
--Juliusz
[1] Aleksandar Kuzmanovic. The power of explicit congestion
notification. In Proceedings of the 2005 conference on Applications,
technologies, architectures, and protocols for computer
communications. 2005.
[2] https://dev.openwrt.org/browser/trunk/package/qos-scripts/
* [Bloat] Thoughts on Stochastic Fair Blue
@ 2011-03-13 1:40 Jonathan Morton
2011-03-24 1:03 ` Juliusz Chroboczek
From: Jonathan Morton @ 2011-03-13 1:40 UTC (permalink / raw)
To: bloat
Having read some more documents and code, I have some extra insight into SFB that might prove helpful. Note that I haven't actually tried it yet, but it looks good anyway. In control-systems parlance, this is effectively a multichannel I-type controller, where RED is a single-channel P-type controller.
My first thought after reading just the paper was that unconditionally dropping the packets which increase the marking probability was suspect. It should be quite possible to manage a queue using just ECN, without any packet loss, in simple cases such as a single bulk TCP flow. Thus I am pleased to see that the SFB code in the debloat tree has separate thresholds for increasing the marking rate and tail-dropping. They are fairly close together, but they are at least distinct.
My second observation is that marking and dropping both happen at the tail of the queue, not the head. This delays the congestion information reaching the receiver, and from there to the sender, by the length of the queue - which does not appear to be self-tuned by the flow rate. However, the default values appear to be sensible.
Another observation is that packets are not re-ordered by SFB, which (given the Bloom filter) is potentially a missed opportunity. However, they can be re-ordered in the current implementation by using child qdiscs such as SFQ, with or without HTB in tandem to capture the queue from a downstream "dumb" device. The major concern with this arrangement is the incompatibility with qdiscs that can drop packets internally, since this is not necessarily obvious to end-user admins.
I also thought of a different way to implement the hash rotation. Instead of shadowing the entire set of buckets, simply replace the hash on one row at a time. This requires that the next-to-minimum values for q_len and p_mark are used, rather than the strict minima. It is still necessary to calculate two hash values for each packet, but the memory requirements are reduced at the expense of effectively removing one row from the Bloom filter.
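A small sketch of that selection rule (the data layout and names here are for illustration, not the sch_sfb structures): with one row's hash freshly replaced, decisions use the second-smallest value across rows, so the just-cleared row cannot drag the Bloom-filter minimum down to zero.

#include <linux/types.h>

#define SFB_ROWS        8
#define SFB_COLS        16

struct sfb_bucket {
        u32 qlen;       /* per-bucket queue length estimate */
        u32 p_mark;     /* per-bucket marking probability (fixed point) */
};

static struct sfb_bucket buckets[SFB_ROWS][SFB_COLS];

/* Return the next-to-minimum qlen over all rows for one packet; hashes[]
 * holds the packet's bucket index in each row.  Discarding the single
 * smallest value means a row whose hash was just rotated, and whose
 * counters are therefore still near zero, is effectively ignored. */
static u32 sfb_qlen_estimate(const u32 *hashes)
{
        u32 min1 = ~0U, min2 = ~0U;
        int i;

        for (i = 0; i < SFB_ROWS; i++) {
                u32 v = buckets[i][hashes[i] % SFB_COLS].qlen;

                if (v < min1) {
                        min2 = min1;
                        min1 = v;
                } else if (v < min2) {
                        min2 = v;
                }
        }
        return min2;
}

The same next-to-minimum rule would apply to p_mark, and only the row whose hash is being replaced needs the second hash computed for each incoming packet.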
- Jonathan