* [Bloat] philosophical question
@ 2011-05-30 4:24 George B.
2011-05-30 7:53 ` Neil Davies
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: George B. @ 2011-05-30 4:24 UTC (permalink / raw)
To: bloat
Ok, say I have a network with no oversubscription in my net. I have
10G to the internet but am only using about 2G of that. This is the
server side of a network talking to millions of clients. The clients
in this case are on "lossy" wireless networks where packet loss is not
an indication of congestion so much as it is an indication that the
client moved 15 feet behind a pole and had poor network connectivity
for a few minutes.
The idea being that in today's internet, packet loss is not a good
indication of congestion. Often it just means that the radio signal
has been briefly interrupted. What I need is something that can tell
the difference between real congestion and radio loss. ECN seems to
be the way forward in that respect.
But assuming my network, as a server of content, is not
oversubscribed, what would you suggest as the best qdisc for such a
traffic profile? In other words, I am looking at this from the server
aspect rather than from the client aspect.
g
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 4:24 [Bloat] philosophical question George B.
@ 2011-05-30 7:53 ` Neil Davies
2011-05-30 12:25 ` Dave Taht
2011-05-31 18:07 ` Bill Sommerfeld
2 siblings, 0 replies; 12+ messages in thread
From: Neil Davies @ 2011-05-30 7:53 UTC (permalink / raw)
To: George B.; +Cc: bloat
FIFO and traffic shaping - every time
Neil
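Neil's "FIFO and traffic shaping" prescription could look something like the following on a Linux box. This is only a sketch, not anything from the thread: the interface name and rates are hypothetical, and the idea is simply to shape to just under the contracted uplink rate so any queue builds where you control it.

```shell
# Shape egress to slightly below the uplink rate so the standing queue
# forms here, in our own (short) buffer, not in the ISP's gear.
# "eth0" and "1900mbit" are illustrative values only.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 1900mbit ceil 1900mbit

# A short FIFO under the shaper bounds worst-case queueing delay.
tc qdisc add dev eth0 parent 1:10 handle 10: pfifo limit 100
```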
On 30 May 2011, at 05:24, George B. wrote:
> Ok, say I have a network with no over subscription in my net. I have
> 10G to the internet but am only using about 2G of that. This is the
> server side of a network talking to millions of clients. The clients
> in this case are on "lossy" wireless networks where packet loss is not
> an indication of congestion so much as it is an indication that the
> client moved 15 feet behind a pole and had poor network connectivity
> for a few minutes.
>
> The idea being that in today's internet, packet loss is not a good
> indication of congestion. Often it just means that the radio signal
> has been briefly interrupted. What I need is something that can tell
> the difference between real congestion and radio loss. ECN seems to
> be the way forward in that respect.
>
> But assuming my network, as a server of content is not over
> subscribed, what would you suggest as the best qdisc for such a
> traffic profile? In other words, I am looking at this from the server
> aspect rather than from the client aspect.
>
> g
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 4:24 [Bloat] philosophical question George B.
2011-05-30 7:53 ` Neil Davies
@ 2011-05-30 12:25 ` Dave Taht
2011-05-30 15:29 ` George B.
2011-05-31 18:07 ` Bill Sommerfeld
2 siblings, 1 reply; 12+ messages in thread
From: Dave Taht @ 2011-05-30 12:25 UTC (permalink / raw)
To: George B.; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1878 bytes --]
On Sun, May 29, 2011 at 10:24 PM, George B. <georgeb@gmail.com> wrote:
> Ok, say I have a network with no over subscription in my net.
I'd love to see one of those. Can I get on it?
> I have
> 10G to the internet but am only using about 2G of that. This is the
> server side of a network talking to millions of clients. The clients
> in this case are on "lossy" wireless networks where packet loss is not
> an indication of congestion so much as it is an indication that the
> client moved 15 feet behind a pole and had poor network connectivity
> for a few minutes.
>
Or is using multicast.
> The idea being that in today's internet, packet loss is not a good
> indication of congestion. Often it just means that the radio signal
> has been briefly interrupted. What I need is something that can tell
> the difference between real congestion and radio loss. ECN seems to
> be the way forward in that respect.
>
Yes. When it works. Which is rarely.
> But assuming my network, as a server of content is not over
> subscribed, what would you suggest as the best qdisc for such a
> traffic profile? In other words, I am looking at this from the server
> aspect rather than from the client aspect.
>
>
Ah, ok. This was discussed in this loooong thread:
https://lists.bufferbloat.net/pipermail/bloat/2011-March/000272.html
Some form of fair queuing distributes the load to the ultimate end nodes
better.
As for which packet scheduler to choose for that? Don't know, I'm just
trying to get to where we can actually test stuff on the edge gateways at
this point.
> g
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com
[-- Attachment #2: Type: text/html, Size: 3134 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 12:25 ` Dave Taht
@ 2011-05-30 15:29 ` George B.
2011-05-30 15:57 ` Jonathan Morton
2011-05-30 17:05 ` Dave Taht
0 siblings, 2 replies; 12+ messages in thread
From: George B. @ 2011-05-30 15:29 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Mon, May 30, 2011 at 5:25 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
>
> On Sun, May 29, 2011 at 10:24 PM, George B. <georgeb@gmail.com> wrote:
>>
>> Ok, say I have a network with no over subscription in my net.
>
> I'd love to see one of those. Can I get on it?
Well, we currently have the potential for some microburst oversub
inside the data center but not too much of it. I can take a 48-port
GigE switch and have 40G of uplink but the switches aren't fully
populated yet. Bottlenecks are currently where we might have 25 front
end servers talking on GigE to a backend server with 20G. So some
potential for internal microburst oversub but that's beyond the scope
of this discussion.
>>
>> I have
>> 10G to the internet but am only using about 2G of that. This is the
>> server side of a network talking to millions of clients. The clients
>> in this case are on "lossy" wireless networks where packet loss is not
>> an indication of congestion so much as it is an indication that the
>> client moved 15 feet behind a pole and had poor network connectivity
>> for a few minutes.
>>
> Or is using multicast.
Multicast is a fact of life that we're going to have to learn to live
with. Better to get the gear to handle it more gracefully, in my
opinion.
>> The idea being that in today's internet, packet loss is not a good
>> indication of congestion. Often it just means that the radio signal
>> has been briefly interrupted. What I need is something that can tell
>> the difference between real congestion and radio loss. ECN seems to
>> be the way forward in that respect.
>>
> Yes. When it works. Which is rarely.
I have enabled ECN (been following various bufferbloat discussions for
a while) on a couple of machines and also my own machine (my own in
order to see where it might cause any problems browsing) without any
problems so far. "Back in the day" when ECN first came out on Linux,
it was enabled by default and caused all sorts of issues with sites
that simply drop packets with either/any of the ECN bits set. So far
there haven't been any issues that I have run into with ECN set on my
Windows laptop. Once I am convinced that setting those bits
isn't going to cause problems, I will roll that out in a more general
fashion. But if networks upstream from us clear those bits anyway, I'm
not convinced what difference it will make.
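For reference, the behavior George describes is typically controlled by a single sysctl on Linux; a sketch of the usual values (the "trial on one box first" approach he outlines):

```shell
# net.ipv4.tcp_ecn values:
#   0 = never negotiate ECN
#   1 = request ECN on outgoing connections too
#   2 = accept ECN only if the peer requests it (server-friendly)
sysctl -w net.ipv4.tcp_ecn=2

# To trial full ECN on a test machine, as described above:
sysctl -w net.ipv4.tcp_ecn=1

# Check what a running box is using:
cat /proc/sys/net/ipv4/tcp_ecn
```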
There is also one fairly small subnet in the overall network where I
have enabled "random-detect ecn" with a policy map on a potentially
oversubscribed link. But that is the only router in the network that
even supports ECN. I have sent an inquiry to the manufacturer of the
rest of the gear about supporting ECN with their WRED implementation
but haven't heard anything from them on the subject.
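On a Linux box, a rough analogue of that router's "random-detect ecn" (WRED with ECN marking) is the RED qdisc with the `ecn` flag: ECN-capable flows get marked rather than dropped once the average queue passes the minimum threshold. All numbers below are illustrative, not tuned values from the thread:

```shell
# RED with ECN marking on a potentially oversubscribed link (sketch).
# min/max/limit are byte thresholds on the averaged queue length.
tc qdisc add dev eth0 root red \
    limit 400000 min 30000 max 90000 avpkt 1000 \
    burst 55 bandwidth 10gbit probability 0.02 ecn
```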
>> But assuming my network, as a server of content is not over
>> subscribed, what would you suggest as the best qdisc for such a
>> traffic profile? In other words, I am looking at this from the server
>> aspect rather than from the client aspect.
>>
>
> Ah, ok. This was discussed in this loooong thread:
>
> https://lists.bufferbloat.net/pipermail/bloat/2011-March/000272.html
>
> Some form of fair queuing distributes the load to the ultimate end nodes
> better.
Ok, as we are using Linux (mostly) for the servers talking to the
clients, it shouldn't be much of an issue to put into place. Thanks
for the pointer to the thread; I will watch as things develop and
see how things go.
> As for which packet scheduler to choose for that? Don't know, I'm just
> trying to get to where we can actually test stuff on the edge gateways at
> this point.
Yeah, what I am most interested in are things like smart
phones/laptops/tablets, not only on WiFi but also on 3G/4G
networks. Those things are pulling a lot of traffic these days and the
network can be lossy at times.
network can be lossy at times. From my own analysis of traffic
captures, it is fairly easy to see when a device that is "on the move"
changes cell towers. You get a burst of resends and often some out of
order packets and then things settle down for a while. This isn't so
big of a deal if you have only a few mobile clients but sites that
cater to mobile content might have millions of such clients connected
at any given time with many of them in a state where they have
marginal connectivity or are in the process of moving between towers.
So the TCP notion that "packet loss == congestion" doesn't apply in
those networks. With those, packet loss is just packet loss and
shouldn't be treated as congestion. This is why I think it is so
important to get ECN working across the Internet. But even with ECN
capable end points, if the routers in the middle are not capable of
using ECN to signal congestion and simply drop packets, there is
always a question of why the packet was lost.
We need to hammer on our vendors a bit and get them properly
supporting ECN to signal congestion on ECN aware flows.
> Dave Täht
Thanks, all!
g
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 15:29 ` George B.
@ 2011-05-30 15:57 ` Jonathan Morton
2011-05-31 17:20 ` George B.
2011-05-30 17:05 ` Dave Taht
1 sibling, 1 reply; 12+ messages in thread
From: Jonathan Morton @ 2011-05-30 15:57 UTC (permalink / raw)
To: George B.; +Cc: bloat
If most of your clients are mobile, you should use a TCP congestion control algorithm such as Westwood+, which is designed for this task: it distinguishes between congestion and random packet losses, and it is much less aggressive at filling buffers than the default CUBIC.
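Switching a Linux server to Westwood+ is a two-line change (sketch; assumes the `tcp_westwood` module is built for your kernel):

```shell
# Load the Westwood+ congestion control module and make it the default
# for new TCP connections.
modprobe tcp_westwood
sysctl -w net.ipv4.tcp_congestion_control=westwood

# Verify what is available and what is active:
cat /proc/sys/net/ipv4/tcp_available_congestion_control
cat /proc/sys/net/ipv4/tcp_congestion_control
```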
Your main bottleneck even at 2Gbps is at the uplink to the ISP. That is where you need an AQM capable router. You have no control over what happens further into the Internet except by turning on ECN. IMHO that is reasonably safe already and more people should do it, but you would be quite justified in running trials and listening for trouble.
What ECN probably needs is a statement from several major players - that is Red Hat, Canonical, Linus, Apple, Microsoft - that they will unilaterally turn on ECN by default in releases and updates after some flag day. It has, after all, been in RFC and implemented for ages, so any remaining broken networks that actually block ECN packets really have no excuse. Stripping ECN is a slightly less serious problem which will be easier to address afterwards.
If your internal bottleneck is a single dumb switch which supports PAUSE, you shouldn't have much trouble and a basic AQM such as SFQ on your servers may be sufficient.
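The SFQ suggestion above is a one-liner on each server; a minimal sketch (interface name illustrative):

```shell
# Stochastic Fairness Queueing at the root: hashes flows into buckets
# and services them round-robin, so one fat flow can't starve the rest.
# perturb re-keys the hash every 10s to break unlucky collisions.
tc qdisc add dev eth0 root sfq perturb 10

# Inspect counters afterwards:
tc -s qdisc show dev eth0
```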
The key to knowledge is not to rely on others to teach you it.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 15:29 ` George B.
2011-05-30 15:57 ` Jonathan Morton
@ 2011-05-30 17:05 ` Dave Taht
1 sibling, 0 replies; 12+ messages in thread
From: Dave Taht @ 2011-05-30 17:05 UTC (permalink / raw)
To: George B., bloat
[-- Attachment #1: Type: text/plain, Size: 1684 bytes --]
On Mon, May 30, 2011 at 9:29 AM, George B. <georgeb@gmail.com> wrote:
> On Mon, May 30, 2011 at 5:25 AM, Dave Taht <dave.taht@gmail.com> wrote:
> >
> >
> > On Sun, May 29, 2011 at 10:24 PM, George B. <georgeb@gmail.com> wrote:
> >>
> >> Ok, say I have a network with no over subscription in my net.
> >
> > I'd love to see one of those. Can I get on it?
>
> Well, we currently have the potential for some microburst oversub
> inside the data center but not too much of it. I can take a 48-port
> GigE switch and have 40G of uplink but the switches aren't fully
> populated yet. Bottlenecks are currently where we might have 25 front
> end servers talking on GigE to a backend server with 20G. So some
> potential for internal microburst oversub but that's beyond the scope
> of this discussion.
>
>
I was serious about asking to get on it. We're trying to get
measurement/test servers in place everywhere we can, so that we can more
deeply analyze the problems bufferbloat is causing and find more ways to
mitigate it.
As one example, I've been coping with dramatic overbuffering in the GigE
switch we are using on the wndr3700v2s, and working on ways to combat it, as
discussed in recent threads on the bloat and bismark-devel lists.
Going SERIOUSLY upstream, from a piece of low-end consumer gear
to something heavier duty, to try and nip this problem in the bud
would do much to undo the damage that the change to 1000-deep txqueuelens
did to the world when GigE first deployed in the data center about 6 years
ago and then migrated out to consumer gear.
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com
[-- Attachment #2: Type: text/html, Size: 2260 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 15:57 ` Jonathan Morton
@ 2011-05-31 17:20 ` George B.
2011-05-31 21:40 ` Juliusz Chroboczek
0 siblings, 1 reply; 12+ messages in thread
From: George B. @ 2011-05-31 17:20 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
On Mon, May 30, 2011 at 8:57 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
> If most of your clients are mobile, you should use a tcp congestion control algorithm such as Westwood+ which is designed for the task. This is designed to distinguish between congestion and random packet losses. It is much less aggressive at filling buffers than the default CUBIC.
Not only are they mobile, the behavior might be considered like that
of a "thin" client in the context of "tcp-thin"
http://www.mjmwired.net/kernel/Documentation/networking/tcp-thin.txt
So the servers have two different sorts of connections. There would
be thousands of long-lived connections with only an occasional packet
going back and forth. Those streams are on lossy mobile networks.
Then there are several hundred very "fat" and fast connections moving
a lot of data. Sometimes a client might change from a "thin" stream
to a "thick" stream if it must collect a lot of content.
So westwood+ along with the "tcp-thin" settings in 2.6.38 might
be a good idea.
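The thin-stream knobs from that kernel era can be enabled system-wide via sysctl (they can also be set per socket with the `TCP_THIN_LINEAR_TIMEOUTS` and `TCP_THIN_DUPACK` socket options); a sketch:

```shell
# Thin streams: use linear (not exponential) RTO backoff, so an
# occasional-packet flow recovers a loss quickly.
sysctl -w net.ipv4.tcp_thin_linear_timeouts=1

# Retransmit after a single duplicate ACK instead of three, since a
# thin stream may never generate three dupACKs.
sysctl -w net.ipv4.tcp_thin_dupack=1
```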
Looking at one server this morning, I have about 27,000 TCP
connections in ESTABLISHED state. Many (most) of these are "thin"
flows to devices that exchange a packet only occasionally. The server
has 16 cores talking to 2 GigE NICs with 8 queues each. There is
about 40 meg/sec of traffic flowing into the server from the network and
the outbound bandwidth is about 8 meg/sec. Much of that 8 meg (about
2.5 meg) is logging traffic going to a local log host via IPv6 + jumbo
frames.
The way this is configured: there are two NICs, eth0 and eth1, and
three vlans; let's call them vlan 2 (front end traffic), vlan 3 (logging
traffic) and vlan 4 (backend traffic). VLAN 2 would be configured on
the NICs (eth0.2 and eth1.2) and then bonded using balance-xor with
the layer2+3 xmit hash. This way a given flow should always hash to a
given vlan interface on a particular NIC. So I have three bond
interfaces talking to a multiqueue-aware vlan driver. This allows one
processor to handle log traffic while a different processor handles
front-end traffic and another processor handles a backend transaction
at the same time.
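In modern iproute2 syntax, that layout for one of the three vlans might be sketched as follows (device names are taken from the description above; the bond name and the iproute2 bond support are assumptions, since in 2011 this would likely have been done via modprobe options and sysfs):

```shell
# One bond per vlan, balance-xor with the layer2+3 transmit hash, so a
# given flow always lands on the same slave NIC. Repeat for vlans 3/4.
ip link add bond2 type bond mode balance-xor xmit_hash_policy layer2+3
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth1 name eth1.2 type vlan id 2
ip link set eth0.2 master bond2
ip link set eth1.2 master bond2
ip link set bond2 up
```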
The higher inbound to outbound ratio is backwards from your
traditional server profile but that is because the traffic that comes
in gets compressed and sent back out and it is mostly text and text
compresses nicely.
/proc/sys/net/ipv4/tcp_ecn is currently set to "2" meaning use ECN if
I see ECN set from the other end but don't initiate connections with
ECN set. As I never initiate connections to the clients, that really
isn't an issue. The client always initiates the connection so if I
see ECN, it will be used.
So the exercise I am going through is trying to determine the best
qdisc and where to put it. On the bond interfaces, on the vlan
interfaces or on the NICs? Something simple like SFQ would probably
work ok. I just want to make sure a single packet to the client can
get through without a lot of delay in the face of a "fat" stream going
somewhere else. Currently it is using the default mq qdisc:
root@foo:~> tc -s qdisc show
qdisc mq 0: dev eth0 root
Sent 5038416030858 bytes 1039959904 pkt (dropped 0, overlimits 0 requeues 24686)
rate 0bit 0pps backlog 0b 0p requeues 24686
qdisc mq 0: dev eth1 root
Sent 1380477553077 bytes 2131951760 pkt (dropped 0, overlimits 0 requeues 2934)
rate 0bit 0pps backlog 0b 0p requeues 2934
So there have been no packets dropped and there is no backlog and the
path is clean all the way to the Internet without any congestion in my
network (the path is currently about 5 times bigger than current
bandwidth utilization and is 10GigE all the way from the switch to
which the server is connected all the way to the Internet). Any
congestion would be somewhere upstream from me.
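One placement option consistent with the setup described above would be to keep mq at the root of each physical NIC and hang SFQ off every hardware queue, so per-flow fairness is preserved per txqueue. A sketch only (the queue count of 8 comes from the description above; the handle is arbitrary):

```shell
# Re-create mq with an explicit handle so its per-queue classes can be
# addressed, then attach SFQ under each of the 8 hardware queues.
tc qdisc replace dev eth0 root handle 100: mq
for i in $(seq 1 8); do
    tc qdisc add dev eth0 parent 100:$i sfq perturb 10
done
```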
Suggestions?
George
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-30 4:24 [Bloat] philosophical question George B.
2011-05-30 7:53 ` Neil Davies
2011-05-30 12:25 ` Dave Taht
@ 2011-05-31 18:07 ` Bill Sommerfeld
2011-05-31 19:17 ` Rick Jones
2011-05-31 19:28 ` George B.
2 siblings, 2 replies; 12+ messages in thread
From: Bill Sommerfeld @ 2011-05-31 18:07 UTC (permalink / raw)
To: George B.; +Cc: bloat
On Sun, May 29, 2011 at 21:24, George B. <georgeb@gmail.com> wrote:
> But assuming my network, as a server of content is not over
> subscribed, what would you suggest as the best qdisc for such a
> traffic profile? In other words, I am looking at this from the server
> aspect rather than from the client aspect.
Philosophical rhetorical question: If the bottlenecks are all outside
your network, where do you expect a queue to build up? Where are you
storing packets that can't be sent right away?
I'd think the TCP congestion control algorithms would be the thing to
worry about, rather than qdiscs...
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-31 18:07 ` Bill Sommerfeld
@ 2011-05-31 19:17 ` Rick Jones
2011-05-31 19:19 ` Jim Gettys
2011-05-31 19:28 ` George B.
1 sibling, 1 reply; 12+ messages in thread
From: Rick Jones @ 2011-05-31 19:17 UTC (permalink / raw)
To: Bill Sommerfeld; +Cc: bloat
On Tue, 2011-05-31 at 11:07 -0700, Bill Sommerfeld wrote:
> On Sun, May 29, 2011 at 21:24, George B. <georgeb@gmail.com> wrote:
> > But assuming my network, as a server of content is not over
> > subscribed, what would you suggest as the best qdisc for such a
> > traffic profile? In other words, I am looking at this from the server
> > aspect rather than from the client aspect.
>
> Philosophical rhetorical question: If the bottlenecks are all outside
> your network, where do you expect a queue to build up? Where are you
> storing packets that can't be sent right away?
>
> I'd think the TCP congestion control algorithms would be the thing to
> worry about, rather than qdiscs...
Definitely.
One of these days I really should write the "Farragut" congestion
control module - "Damn the losses, full speed ahead" :)
rick jones
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-31 19:17 ` Rick Jones
@ 2011-05-31 19:19 ` Jim Gettys
0 siblings, 0 replies; 12+ messages in thread
From: Jim Gettys @ 2011-05-31 19:19 UTC (permalink / raw)
To: bloat
On 05/31/2011 03:17 PM, Rick Jones wrote:
> On Tue, 2011-05-31 at 11:07 -0700, Bill Sommerfeld wrote:
>> On Sun, May 29, 2011 at 21:24, George B.<georgeb@gmail.com> wrote:
>>> But assuming my network, as a server of content is not over
>>> subscribed, what would you suggest as the best qdisc for such a
>>> traffic profile? In other words, I am looking at this from the server
>>> aspect rather than from the client aspect.
>> Philosophical rhetorical question: If the bottlenecks are all outside
>> your network, where do you expect a queue to build up? Where are you
>> storing packets that can't be sent right away?
>>
>> I'd think the TCP congestion control algorithms would be the thing to
>> worry about, rather than qdiscs...
> Definitely.
>
> One of these days I really should write the "Farragut" congestion
> control module - "Damn the losses, full speed ahead" :)
No need to bother: web servers and clients are already doing this to us,
with the N connections crossed by the initial congestion window changes :-(.
- Jim
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-31 18:07 ` Bill Sommerfeld
2011-05-31 19:17 ` Rick Jones
@ 2011-05-31 19:28 ` George B.
1 sibling, 0 replies; 12+ messages in thread
From: George B. @ 2011-05-31 19:28 UTC (permalink / raw)
To: Bill Sommerfeld; +Cc: bloat
> Philosophical rhetorical question: If the bottlenecks are all outside
> your network, where do you expect a queue to build up? Where are you
> storing packets that can't be sent right away?
>
> I'd think the TCP congestion control algorithms would be the thing to
> worry about, rather than qdiscs...
Yeah, that's where I'm leaning, too. westwood+ combined with the
tcp-thin settings should be the ticket, I would think.
George
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bloat] philosophical question
2011-05-31 17:20 ` George B.
@ 2011-05-31 21:40 ` Juliusz Chroboczek
0 siblings, 0 replies; 12+ messages in thread
From: Juliusz Chroboczek @ 2011-05-31 21:40 UTC (permalink / raw)
To: George B.; +Cc: bloat
> So there have been no packets dropped and there is no backlog and the
> path is clean all the way to the Internet without any congestion in my
> network (the path is currently about 5 times bigger than current
> bandwidth utilization and is 10GigE all the way from the switch to
> which the server is connected all the way to the Internet). Any
> congestion would be somewhere upstream from me.
If you're not congested, then don't bother. Just reduce the amount of
buffering as much as it will go without reducing throughput, and be
happy.
If you see congestion, then try to put an AQM at the bottleneck node.
If you cannot do that, you'll need to artificially throttle your server
(using tbf or htb, if Linux) in order to move the bottleneck, with all
the complexity and inefficiency that this entails.
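The "move the bottleneck into the server" trick might be sketched as follows; the rate must sit just below the true bottleneck, and all numbers here are illustrative:

```shell
# Throttle egress below the real bottleneck so queues build locally,
# then hang a fairness qdisc under the shaper.
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 1800mbit
tc qdisc add dev eth0 parent 1:1 sfq perturb 10
```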
-- Juliusz
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2011-05-31 21:24 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-30 4:24 [Bloat] philosophical question George B.
2011-05-30 7:53 ` Neil Davies
2011-05-30 12:25 ` Dave Taht
2011-05-30 15:29 ` George B.
2011-05-30 15:57 ` Jonathan Morton
2011-05-31 17:20 ` George B.
2011-05-31 21:40 ` Juliusz Chroboczek
2011-05-30 17:05 ` Dave Taht
2011-05-31 18:07 ` Bill Sommerfeld
2011-05-31 19:17 ` Rick Jones
2011-05-31 19:19 ` Jim Gettys
2011-05-31 19:28 ` George B.