[Bloat] Thoughts on Stochastic Fair Blue

Eric Dumazet eric.dumazet at gmail.com
Thu Mar 24 06:32:36 PDT 2011


Le jeudi 24 mars 2011 à 14:40 +0200, Jonathan Morton a écrit :

> Finally, it might also be interesting and useful to add bare-bones ECN
> support to the existing "dumb" qdiscs, such as SFQ and the FIFO
> family.  Simply start marking (and dropping non-supporting flows) when
> the queue is more than half full.

Three months ago, I played with an SFQ patch that adds ECN support based on
the delay a packet spends in the queue.

http://www.spinics.net/lists/netdev/msg151594.html

This patch is of course a hack (units are jiffies, not ms).


--------------------------------------

Here is the proof-of-concept patch I am currently testing: it "early
drops" a packet with a probability of one percent per ms (HZ=1000 here),
but only if the packet has stayed in the queue for at least 4 ms.
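To make the probability math concrete, here is a minimal standalone sketch of the computation the patch does at dequeue time. The function name is hypothetical; the constants (24-bit random range, one percent per jiffy) mirror the patch below.

```c
/* Hypothetical standalone model of the patch's early-drop probability:
 * once a packet has waited at least red_delay jiffies, the chance of a
 * mark/drop grows by one percent per jiffy of queueing delay.  The
 * result Px is scaled to the 24-bit range that the patch compares
 * against (net_random() & 0xFFFFFF). */
static long mark_probability_px(long delay_jiffies, long red_delay)
{
	if (red_delay == 0 || delay_jiffies < red_delay)
		return 0;	/* feature off, or below threshold: never mark */
	return delay_jiffies * (0xFFFFFF / 100);	/* 1% per jiffy */
}
```

With red_delay=4 and HZ=1000, a packet queued for 10 ms gets roughly a 10% chance of being marked or dropped on dequeue.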

Of course, this only applies where SFQ is used, with the known SFQ limits :)

The term "early drop" is a bit of a lie. RED really marks/drops a packet
early, at enqueue() time, while I do it at dequeue() time [since I need to
compute the delay]. But the effect on sent packets is the same. This might
use a bit more memory, but no more than current SFQ [and only if flows
don't react to marks/drops].
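The dequeue-time decision can be summarized as a small pure function (names here are hypothetical; the real patch calls INET_ECN_set_ce() to mark and falls back to dropping when the packet is not ECN-capable):

```c
enum red_verdict { RED_PASS, RED_MARK, RED_DROP };

/* Hypothetical model of the patch's dequeue-time decision.
 * ecn_capable mirrors whether INET_ECN_set_ce() would succeed;
 * rnd24 stands in for net_random() & 0xFFFFFF. */
static enum red_verdict red_decide(long delay, long red_delay,
				   int ecn_capable, long rnd24)
{
	if (red_delay == 0 || delay < red_delay)
		return RED_PASS;	/* feature off or below threshold */
	if (rnd24 >= delay * (0xFFFFFF / 100))
		return RED_PASS;	/* probability check not hit */
	return ecn_capable ? RED_MARK : RED_DROP;
}
```

ECN-capable flows get a CE mark and the packet is still delivered; non-ECN flows pay with a drop, which is the penalty the patch comment alludes to.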

insmod net/sched/sch_sfq.ko red_delay=4

By the way, I do think we should lower SFQ_DEPTH a bit and increase
SFQ_SLOTS by the same amount. Allowing 127 packets per flow seems
unnecessary in most situations where SFQ might be used.

 net/sched/sch_sfq.c |   37 +++++++++++++++++++++++++++++++++----
 1 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index d54ac94..4f958e3 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -24,6 +24,8 @@
 #include <net/ip.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
+#include <net/inet_ecn.h>
+#include <linux/moduleparam.h>
 
 
 /*     Stochastic Fairness Queuing algorithm.
@@ -86,6 +88,10 @@
 /* This type should contain at least SFQ_DEPTH + SFQ_SLOTS values */
 typedef unsigned char sfq_index;
 
+static int red_delay; /* default : no RED handling */
+module_param(red_delay, int, 0);
+MODULE_PARM_DESC(red_delay, "mark/drop packets if they stay in queue longer than red_delay ticks");
+
 /*
  * We dont use pointers to save space.
  * Small indexes [0 ... SFQ_SLOTS - 1] are 'pointers' to slots[] array
@@ -391,6 +397,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 
        sch->qstats.backlog += qdisc_pkt_len(skb);
        slot_queue_add(slot, skb);
+       qdisc_skb_cb(skb)->timestamp = jiffies;
        sfq_inc(q, x);
        if (slot->qlen == 1) {          /* The flow is new */
                if (q->tail == NULL) {  /* It is the first flow */
@@ -402,11 +409,8 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
                q->tail = slot;
                slot->allot = q->scaled_quantum;
        }
-       if (++sch->q.qlen <= q->limit) {
-               sch->bstats.bytes += qdisc_pkt_len(skb);
-               sch->bstats.packets++;
+       if (++sch->q.qlen <= q->limit)
                return NET_XMIT_SUCCESS;
-       }
 
        sfq_drop(sch);
        return NET_XMIT_CN;
@@ -432,6 +436,7 @@ sfq_dequeue(struct Qdisc *sch)
        sfq_index a, next_a;
        struct sfq_slot *slot;
 
+restart:
        /* No active slots */
        if (q->tail == NULL)
                return NULL;
@@ -455,12 +460,36 @@ next_slot:
                next_a = slot->next;
                if (a == next_a) {
                        q->tail = NULL; /* no more active slots */
+                       /* last packet queued, dont even try to apply RED */
                        return skb;
                }
                q->tail->next = next_a;
        } else {
                slot->allot -= SFQ_ALLOT_SIZE(qdisc_pkt_len(skb));
        }
+       if (red_delay) {
+               long delay = jiffies - qdisc_skb_cb(skb)->timestamp;
+
+               if (delay >= red_delay) {
+                       long Px = delay * (0xFFFFFF / 100); /* 1 percent per jiffy */
+                       if ((net_random() & 0xFFFFFF) < Px) {
+                               if (INET_ECN_set_ce(skb)) {
+                                       /* no ecnmark counter yet :) */
+                                       sch->qstats.overlimits++;
+                               } else {
+                                       /* penalize this flow : we drop the 
+                                        * packet while we changed slot->allot
+                                        */
+                                       kfree_skb(skb);
+                                       /* no early_drop counter yet :) */
+                                       sch->qstats.drops++;
+                                       goto restart;
+                               }
+                       }
+               }
+       }
+       sch->bstats.bytes += qdisc_pkt_len(skb);
+       sch->bstats.packets++;
        return skb;
 }
 



