Development issues regarding the cerowrt test router project
* Re: [Cerowrt-devel] 3.3rc7-5 is out
       [not found] <mailman.2.1331924401.3675.cerowrt-devel@lists.bufferbloat.net>
@ 2012-03-17  1:40 ` Richard Brown
  2012-03-17  1:45   ` Dave Taht
  0 siblings, 1 reply; 11+ messages in thread
From: Richard Brown @ 2012-03-17  1:40 UTC (permalink / raw)
  To: <cerowrt-devel@lists.bufferbloat.net>

I just installed 3.3-rc7-5 and all seems fine in my limited testing. 

I see there's a Network -> AQM tab in the GUI. It seems similar to the QoS tab. 

Should the docs tell people to use the same procedure as for QoS? That is, should people:

- Disable QoS (the default is disabled)
- Run a speed test, say on http://speedtest.net
- Enable QoS, and set the download and upload speeds to a few percent less than the speedtest indicates.
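
The "few percent less" arithmetic can be sketched in shell; the measured rates and the 95% factor below are made-up assumptions, not project guidance:

```shell
# Hypothetical sketch: compute shaped rates a few percent below measured
# speeds. The input rates and the 95% factor are illustrative assumptions.
DOWN_KBPS=20000   # measured download from the speed test, kbit/s
UP_KBPS=4000      # measured upload, kbit/s

SHAPED_DOWN=$((DOWN_KBPS * 95 / 100))
SHAPED_UP=$((UP_KBPS * 95 / 100))

echo "set download to ${SHAPED_DOWN} kbit/s, upload to ${SHAPED_UP} kbit/s"
```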

Thanks.



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Cerowrt-devel] 3.3rc7-5 is out
  2012-03-17  1:40 ` [Cerowrt-devel] 3.3rc7-5 is out Richard Brown
@ 2012-03-17  1:45   ` Dave Taht
  2012-03-17  2:04     ` [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net Richard Brown
  2012-03-17  2:32     ` [Cerowrt-devel] 3.3rc7-5 is out Sebastian Moeller
  0 siblings, 2 replies; 11+ messages in thread
From: Dave Taht @ 2012-03-17  1:45 UTC (permalink / raw)
  To: Richard Brown; +Cc: <cerowrt-devel@lists.bufferbloat.net>

heh. Leave AQM undocumented for now. It's got issues on ingress.

On Fri, Mar 16, 2012 at 6:40 PM, Richard Brown
<richard.e.brown@dartware.com> wrote:
> I just installed 3.3-rc7-5 and all seems fine in my limited testing.
>
> I see there's a Network -> AQM tab in the GUI. It seems similar to the QoS tab.
>
> Should the doc's tell people to use the same procedure as the QoS? That is, should people
>
> - Disable QoS (the default is disabled)
> - Run a speed test, say on http://speedtest.net
> - Enable QoS, and set the download and upload speeds to a few percent less than the speedtest indicates.
>
> Thanks.
>
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


* [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net.
  2012-03-17  1:45   ` Dave Taht
@ 2012-03-17  2:04     ` Richard Brown
  2012-03-17  2:20       ` Dave Taht
  2012-03-17  2:32     ` [Cerowrt-devel] 3.3rc7-5 is out Sebastian Moeller
  1 sibling, 1 reply; 11+ messages in thread
From: Richard Brown @ 2012-03-17  2:04 UTC (permalink / raw)
  To: Dave Taht; +Cc: Richard Brown, <cerowrt-devel@lists.bufferbloat.net>

I see that Jupiter is running an older version of CeroWrt - it still mentions Ocean City in the built-in web pages. Is this intended?


* Re: [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net.
  2012-03-17  2:04     ` [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net Richard Brown
@ 2012-03-17  2:20       ` Dave Taht
  2012-03-17  2:30         ` Sebastian Moeller
  2012-03-17  3:00         ` Richard Brown
  0 siblings, 2 replies; 11+ messages in thread
From: Dave Taht @ 2012-03-17  2:20 UTC (permalink / raw)
  To: Richard Brown; +Cc: <cerowrt-devel@lists.bufferbloat.net>

Heh. It's the main router for the lab... it runs dns... it has a vpn
on it... it's running rc6... it's STABLE.

I've updated 4 (out of 9) of the machines in the lab thus far, and
basically plan
to cycle through them all with successive releases before updating jupiter.

If you have working ipv6, the external gui is available....

http://[2001:4f8:fff8:600::1]/cerowrt/ as one example

http://europa.lab.bufferbloat.net as another (both ipv6 and ipv4)

I'd appreciate knowing that port 81 (the router's configuration web
server) and ssh were indeed blocked
from the outside world.....


On Fri, Mar 16, 2012 at 7:04 PM, Richard Brown
<richard.e.brown@dartware.com> wrote:
> I see that Jupiter is running an older version of CeroWrt - it still mentions Ocean City in the built-in web pages. Is this intended?



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


* Re: [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net.
  2012-03-17  2:20       ` Dave Taht
@ 2012-03-17  2:30         ` Sebastian Moeller
  2012-03-17  3:00         ` Richard Brown
  1 sibling, 0 replies; 11+ messages in thread
From: Sebastian Moeller @ 2012-03-17  2:30 UTC (permalink / raw)
  To: Dave Taht; +Cc: cerowrt-devel

Hi Dave,


On Mar 16, 2012, at 7:20 PM, Dave Taht wrote:

> Heh. It's the main router for the lab... it runs dns... it has a vpn
> on it... it's running rc6... it's STABLE.
> 
> I've updated 4 (out of 9) of the machines in the lab thus far, and
> basically plan
> to cycle through them all with successive releases before updating jupiter.
> 
> If you have working ipv6, the external gui is available....
> 
> http://[2001:4f8:fff8:600::1]/cerowrt/ as one example
> 
> http://europa.lab.bufferbloat.net as another (both ipv6 and ipv4)
> 
> I'd appreciate knowing that port 81 (the router's configuration web
> server) and ssh were indeed blocked
> from the outside world…..

	Both http://europa.lab.bufferbloat.net:81 and ssh root@europa.lab.bufferbloat.net do not work for me (currently from 75.142.58.156), just as you would expect.

best
	Sebastian

> 
> 
> On Fri, Mar 16, 2012 at 7:04 PM, Richard Brown
> <richard.e.brown@dartware.com> wrote:
>> I see that Jupiter is running an older version of CeroWrt - it still mentions Ocean City in the built-in web pages. Is this intended?
> 
> 
> 
> -- 
> Dave Täht
> SKYPE: davetaht
> US Tel: 1-239-829-5608
> http://www.bufferbloat.net



* Re: [Cerowrt-devel] 3.3rc7-5 is out
  2012-03-17  1:45   ` Dave Taht
  2012-03-17  2:04     ` [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net Richard Brown
@ 2012-03-17  2:32     ` Sebastian Moeller
  2012-03-17  2:48       ` Dave Taht
  1 sibling, 1 reply; 11+ messages in thread
From: Sebastian Moeller @ 2012-03-17  2:32 UTC (permalink / raw)
  To: Dave Taht; +Cc: Richard Brown, <cerowrt-devel@lists.bufferbloat.net>

Hi Dave,


On Mar 16, 2012, at 6:45 PM, Dave Taht wrote:

> heh. Leave AQM undocumented for now. It's got issues on ingress.

	Pooh, and I thought I had fudged my instance of AQM so ingress broke :). BTW is there an easy way to get the most recent debloat script directly from cerowrt?

Best
	Sebastian

> 
> On Fri, Mar 16, 2012 at 6:40 PM, Richard Brown
> <richard.e.brown@dartware.com> wrote:
>> I just installed 3.3-rc7-5 and all seems fine in my limited testing.
>> 
>> I see there's a Network -> AQM tab in the GUI. It seems similar to the QoS tab.
>> 
>> Should the doc's tell people to use the same procedure as the QoS? That is, should people
>> 
>> - Disable QoS (the default is disabled)
>> - Run a speed test, say on http://speedtest.net
>> - Enable QoS, and set the download and upload speeds to a few percent less than the speedtest indicates.
>> 
>> Thanks.
>> 
>> 
> 
> 
> 
> -- 
> Dave Täht
> SKYPE: davetaht
> US Tel: 1-239-829-5608
> http://www.bufferbloat.net



* Re: [Cerowrt-devel] 3.3rc7-5 is out
  2012-03-17  2:32     ` [Cerowrt-devel] 3.3rc7-5 is out Sebastian Moeller
@ 2012-03-17  2:48       ` Dave Taht
  2012-03-17  3:03         ` [Cerowrt-devel] Link to SFQRED description? Richard Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Taht @ 2012-03-17  2:48 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Richard Brown, <cerowrt-devel@lists.bufferbloat.net>

On Fri, Mar 16, 2012 at 7:32 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
>
> On Mar 16, 2012, at 6:45 PM, Dave Taht wrote:
>
>> heh. Leave AQM undocumented for now. It's got issues on ingress.
>
>        Pooh, and I thought I had fudged my instance of AQM so ingress broke :). BTW is there an easy way to get the most recent debloat script directly from cerowrt?

sure, but it won't help. You can just slam the debloat script from the
deBloat repo on github on there...
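
A minimal fetch might look like the following (printed rather than run); the repo layout, script name, and router address are assumptions, not verified paths:

```shell
# Hypothetical sketch: print the commands to fetch the debloat script from
# the deBloat repo and push it to the router. Script path and router
# address are assumptions.
REPO=https://github.com/dtaht/deBloat
ROUTER=root@gw.home.lan

echo "git clone $REPO"
echo "scp deBloat/debloat.sh $ROUTER:/usr/sbin/debloat.sh"
```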

0) In most asymmetric situations ingress control doesn't matter
anywhere near as much as egress.

1) The ingress issue is pesky. I don't get what's going wrong. It's
something v6 related, I think.

2) There are not 1, but 4 out-of-tree versions of the debloat script
now - one each for 4Mbit comcast, 9Mbit comcast, 26Mbit FIOS, and
15Mbit DSL.

all with pretty widely different configurations.

The data points collected thus far are quite resistant to algorithmic
analysis, and are very kernel and workload dependent. Worse, at higher
speeds (60Mbit and above) running tests on the router itself
heisenbugs the results.

Here are the variables:

tx ring size. Presently 4 by default. Last year, at really low speed
uplinks, 2 was better than 4. Above 60Mbit, 4 is not enough.

bql seems to do the job without fiddling with this anymore, I tend to
use 16 or 32 these days for tx ring.

bql size. Depending on the script involved this varies from automatic
(which is usually about 8x bigger than what
seems 'right'), to 3k (which is good up to about 110Mbit), to about
18k (up to 260Mbit)

sfqred related stuff

limit - range of 200-300 seems to work up to about 80-120Mbit. I'll
argue based on the data in point 4, below, that in the real world it
needs to be more kleinrock-like, with a large estimate for flows.

min = 3k
max = 18k
probability ranges of .12 to .2
redflowlimit - presently very small, not suitable for much more than 60Mbit/sec

Tuning all the above is tricky, and sfq changes how you have to think
about how red works...

htb and htb quantum size - at low rates, you want a minimal quantum,
at higher rates... don't know...

ecn works GREAT. And most of my data is with ecn on. ecn off changes matters....

It's a hell of a few data points to extrapolate from...
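
Pulled together, a dry-run sketch of the knobs above, printing the commands instead of running them; the device name and every numeric value are illustrative assumptions, not recommendations:

```shell
# Dry-run sketch of the tuning knobs discussed above. All values are
# illustrative only; on a real router these would need root and a real
# device name.
DEV=ge00

TXRING="ethtool -G $DEV tx 16"                       # tx ring size
BQL="echo 3000 > /sys/class/net/$DEV/queues/tx-0/byte_queue_limits/limit_max"
HTB_ROOT="tc qdisc add dev $DEV root handle 1: htb default 1"
HTB_CLASS="tc class add dev $DEV parent 1: classid 1:1 htb rate 20mbit quantum 1514"
SFQRED="tc qdisc add dev $DEV parent 1:1 handle 10: sfq limit 300 headdrop redflowlimit 100000 min 3000 max 18000 probability 0.15 ecn"

for cmd in "$TXRING" "$BQL" "$HTB_ROOT" "$HTB_CLASS" "$SFQRED"; do
    echo "$cmd"
done
```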

3) a portion of sfqred I depended upon was just pulled out of the
mainline patch set due to some starvation issues observed. I just
plain don't see these issues with any test I come up with with sfq+red
enabled, but it was easy for the reporter to mess up sfq itself...

http://www.spinics.net/lists/netdev/msg191682.html

I'm gonna hate losing this in the general case, and unless I can make
it happen with sfqred, I'm going to keep this feature in cerowrt 3.3.

So... I'm still trying to get the lab built up...

>
> Best
>        Sebastian
>
>>
>> On Fri, Mar 16, 2012 at 6:40 PM, Richard Brown
>> <richard.e.brown@dartware.com> wrote:
>>> I just installed 3.3-rc7-5 and all seems fine in my limited testing.
>>>
>>> I see there's a Network -> AQM tab in the GUI. It seems similar to the QoS tab.
>>>
>>> Should the doc's tell people to use the same procedure as the QoS? That is, should people
>>>
>>> - Disable QoS (the default is disabled)
>>> - Run a speed test, say on http://speedtest.net
>>> - Enable QoS, and set the download and upload speeds to a few percent less than the speedtest indicates.
>>>
>>> Thanks.
>>>
>>>
>>
>>
>>
>> --
>> Dave Täht
>> SKYPE: davetaht
>> US Tel: 1-239-829-5608
>> http://www.bufferbloat.net
>



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


* Re: [Cerowrt-devel] What version of CeroWrt is running at http://jupiter.lab.bufferbloat.net.
  2012-03-17  2:20       ` Dave Taht
  2012-03-17  2:30         ` Sebastian Moeller
@ 2012-03-17  3:00         ` Richard Brown
  1 sibling, 0 replies; 11+ messages in thread
From: Richard Brown @ 2012-03-17  3:00 UTC (permalink / raw)
  To: Dave Taht; +Cc: Richard Brown, <cerowrt-devel@lists.bufferbloat.net>

Dave,

> Heh. It's the main router for the lab... it runs dns... it has a vpn
> on it... it's running rc6... it's STABLE.
> 
> I've updated 4 (out of 9) of the machines in the lab thus far, and
> basically plan
> to cycle through them all with successive releases before updating jupiter.

OK. It's just my consistency-checkin' nature.

> If you have working ipv6, the external gui is available....
> 
> http://[2001:4f8:fff8:600::1]/cerowrt/ as one example

Not yet. I'm going to check out Hurricane Electric's http://tunnelbroker.net this weekend.

> http://europa.lab.bufferbloat.net as another (both ipv6 and ipv4)
> 
> I'd appreciate knowing that port 81 (the router's configuration web
> server) and ssh were indeed blocked
> from the outside world.....

Clicking the link on Europa goes to this URL: http://europa.lab.bufferbloat.net/bgi-bin/redir.sh, which tries to go to http://europa.lab.bufferbloat.net:81, which ultimately fails. Same thing with Jupiter.

You can also test this using my favorite web-site testing tool, Rex Swain's HTTP Viewer (http://www.rexswain.com/httpview.html). It shows the complete HTTP transaction, along with redirects, so you can see what's actually happening.

re: SSH...  

ssh root@jupiter.lab.bufferbloat.net gives 'ssh_exchange_identification: Connection closed by remote host' 
ssh root@europa.lab.bufferbloat.net gives 'ssh: connect to host europa.lab.bufferbloat.net port 22: No route to host' (!) 

but dig shows europa.lab.bufferbloat.net. 82758 IN	A	149.20.63.19, so I'm not sure what that implies.

Rich


* [Cerowrt-devel] Link to SFQRED description?
  2012-03-17  2:48       ` Dave Taht
@ 2012-03-17  3:03         ` Richard Brown
  2012-03-17  3:11           ` Dave Taht
  0 siblings, 1 reply; 11+ messages in thread
From: Richard Brown @ 2012-03-17  3:03 UTC (permalink / raw)
  To: Dave Taht; +Cc: Richard Brown, <cerowrt-devel@lists.bufferbloat.net>

Anyone have a link to a description to the SFQRED queue discipline? I'd like to put it onto the wiki.


* Re: [Cerowrt-devel] Link to SFQRED description?
  2012-03-17  3:03         ` [Cerowrt-devel] Link to SFQRED description? Richard Brown
@ 2012-03-17  3:11           ` Dave Taht
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Taht @ 2012-03-17  3:11 UTC (permalink / raw)
  To: Richard Brown; +Cc: <cerowrt-devel@lists.bufferbloat.net>

Definitely need to work on a good description of that.

commit ddecf0f4db44ef94847a62d6ecf74456b4dcc66f
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Fri Jan 6 06:31:44 2012 +0000

    net_sched: sfq: add optional RED on top of SFQ

    Adds an optional Random Early Detection on each SFQ flow queue.

    Traditional SFQ limits count of packets, while RED permits to also
    control number of bytes per flow, and adds ECN capability as well.

    1) We dont handle the idle time management in this RED implementation,
    since each 'new flow' begins with a null qavg. We really want to address
    backlogged flows.

    2) if headdrop is selected, we try to ecn mark first packet instead of
    currently enqueued packet. This gives faster feedback for tcp flows
    compared to traditional RED [ marking the last packet in queue ]

    Example of use :

    tc qdisc add dev $DEV parent 1:1 handle 10: est 1sec 4sec sfq \
        limit 3000 headdrop flows 512 divisor 16384 \
        redflowlimit 100000 min 8000 max 60000 probability 0.20 ecn

    qdisc sfq 10: parent 1:1 limit 3000p quantum 1514b depth 127 headdrop
    flows 512/16384 divisor 16384
     ewma 6 min 8000b max 60000b probability 0.2 ecn
     prob_mark 0 prob_mark_head 4876 prob_drop 6131
     forced_mark 0 forced_mark_head 0 forced_drop 0
     Sent 1175211782 bytes 777537 pkt (dropped 6131, overlimits 11007
    requeues 0)
     rate 99483Kbit 8219pps backlog 689392b 456p requeues 0

    In this test, with 64 netperf TCP_STREAM sessions, 50% using ECN enabled
    flows, we can see number of packets CE marked is smaller than number of
    drops (for non ECN flows)

    If same test is run, without RED, we can check backlog is much bigger.

    qdisc sfq 10: parent 1:1 limit 3000p quantum 1514b depth 127 headdrop
    flows 512/16384 divisor 16384
     Sent 1148683617 bytes 795006 pkt (dropped 0, overlimits 0 requeues 0)


Which builds on this:

commit 18cb809850fb499ad9bf288696a95f4071f73931
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Wed Jan 4 14:18:38 2012 +0000

    net_sched: sfq: extend limits

    SFQ as implemented in Linux is very limited, with at most 127 flows
    and limit of 127 packets. [ So if 127 flows are active, we have one
    packet per flow ]

    This patch brings to SFQ following features to cope with modern needs.

    - Ability to specify a smaller per flow limit of inflight packets.
        (default value being at 127 packets)

    - Ability to have up to 65408 active flows (instead of 127)

    - Ability to have head drops instead of tail drops
      (to drop old packets from a flow)

    Example of use : No more than 20 packets per flow, max 8000 flows, max
    20000 packets in SFQ qdisc, hash table of 65536 slots.

    tc qdisc add ... sfq \
            flows 8000 \
            depth 20 \
            headdrop \
            limit 20000 \
        divisor 65536

    Ram usage :

    2 bytes per hash table entry (instead of previous 1 byte/entry)
    32 bytes per flow on 64bit arches, instead of 384 for QFQ, so much
    better cache hit ratio.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    CC: Dave Taht <dave.taht@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>


which builds on this:

commit 02a9098ede0dc7e28c16a03fa7fba86a05219478
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Wed Jan 4 06:23:01 2012 +0000

    net_sched: sfq: always randomize hash perturbation

    SFQ q->perturbation is used in sfq_hash() as an input to Jenkins hash.

    We currently randomize this 32bit value only if a perturbation timer is
    setup.

    Its much better to always initialize it to defeat attackers, or else
    they can predict very well what kind of packets they have to forge to
    hit a particular flow.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>


which builds on this (which is now pulled):

commit d47a0ac7b66883987275598d6039f902f4410ca9
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Sun Jan 1 18:33:31 2012 +0000

    sch_sfq: dont put new flow at the end of flows

    SFQ enqueue algo puts a new flow _behind_ all pre-existing flows in the
    circular list. In fact this is probably an old SFQ implementation bug.

    100 Mbits = ~8333 full frames per second, or ~8 frames per ms.

    With 50 flows, it means your "new flow" will have to wait 50 packets
    being sent before its own packet. Thats the ~6ms.

    We certainly can change SFQ to give a priority advantage to new flows,
    so that next dequeued packet is taken from a new flow, not an old one.

    Reported-by: Dave Taht <dave.taht@gmail.com>
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
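
The frame-rate arithmetic in that commit message can be checked quickly, assuming 1500-byte frames:

```shell
# Verify the commit's arithmetic: ~8333 full frames/sec at 100 Mbit/s,
# so 50 queued packets cost ~6 ms. 1500-byte frames assumed.
FRAME_BITS=$((1500 * 8))
RATE_BPS=100000000
FPS=$((RATE_BPS / FRAME_BITS))   # frames per second
WAIT_MS=$((50 * 1000 / FPS))     # delay behind 50 packets, in ms

echo "${FPS} frames/sec, ~${WAIT_MS} ms behind 50 packets"
```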


This was an idea discarded in the original SFQ paper in 1990 as 'too
computationally intensive', but as queues got deeper it became a bigger
problem...


http://www.coverfire.com/archives/2009/06/28/linux-sfq-experimentation/

so fixed here:

commit 225d9b89c937633dfeec502741a174fe0bab5b9f
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Wed Dec 21 03:30:11 2011 +0000

    sch_sfq: rehash queues in perturb timer

    A known Out Of Order (OOO) problem hurts SFQ when timer changes
    perturbation value, since all new packets delivered to SFQ enqueue might
    end on different slots than previous in-flight packets.

    With round robin delivery, we can thus deliver packets in a different
    order.

    Since SFQ is limited to small amount of in-flight packets, we can rehash
    packets so that this OOO problem is fixed.

    This rehashing is performed only if internal flow classifier is in use.

    We now store in skb->cb[] the "struct flow_keys" so that we dont call
    skb_flow_dissect() again while rehashing.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>


On Fri, Mar 16, 2012 at 8:03 PM, Richard Brown
<richard.e.brown@dartware.com> wrote:
> Anyone have a link to a description to the SFQRED queue discipline? I'd like to put it onto the wiki.



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


* [Cerowrt-devel] 3.3rc7-5 is out
@ 2012-03-16  8:11 Dave Taht
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Taht @ 2012-03-16  8:11 UTC (permalink / raw)
  To: cerowrt-devel

I had some issues in getting this build done. It won't be buildable by
others, either. There was some churn in the related patch sets; in
particular, some xtables stuff broke, and I'm not going to check my hacks
for it into the repo.

I needed something stable enough for ietf next week and I hope this is
it. If it isn't, well... we'll see.

Notes:

0) this has the latest and greatest dnsmasq in it, which has
increasing amounts of dhcpv6 and ra support in it.

1) The configuration for 6to4 is now disabled by default. About the
only place it ever worked well was on comcast. Recently a new bug
cropped up where it was only distributing one interface address...

And it irked me in the lab, where I have native ipv6, and am working on PD...

(it can still be enabled via the gui or by uncommenting the stuff)

2) As noted in a previous missive the default ipv6 firewall rules need work.


-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


end of thread, other threads:[~2012-03-17  3:11 UTC | newest]
