[Bloat] bufferbloat and the web service providers

Jonathan Morton chromatix99 at gmail.com
Mon Nov 19 08:13:51 EST 2012


On 19 Nov, 2012, at 2:32 pm, Oliver Hohlfeld wrote:

>> (...) because the real problem was
>> bufferbloat - and then work together to move forward, on an
>> engineering, rather than political basis.
> (...)
>> I have long hoped to get these services aware
>> that they needed to help fix bufferbloat if they wanted their cloud
>> based businesses to work better.
> 
> Is there any evidence that they do suffer from bufferbloat? And if
> so, how many of their customers are affected? 0.001%? 0.1%? 10%?
> So, what is the economic extent of the problem for these service
> providers?

I see many websites whose bottleneck is the database containing their content, not the physical uplink.  They can easily be identified by the way they serve up database connection errors when under unusually heavy load.  For example, http://www.robertsspaceindustries.com/ about 12 hours ago, as they pushed through about $1M of pledges within 24 hours.  The Raspberry Pi site deliberately switched to a lightweight static page for their launch event back in February, thereby staying up while two major suppliers' sites (both operating very heavy e-commerce designs) crashed hard under the load.

Databases get used by a lot of websites these days, either as part of a forum or e-commerce backend (entirely reasonable) or for serving up standard but frequently updated content - where "frequently updated" means several times a day, rather than several times a second.  In this latter case, it would make far more sense to periodically bake the content into static pages that can be served without hitting the database.  This in turn would free up DB capacity for things that really need it, such as the forum and e-commerce transactions.
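As a rough sketch of that baking step (hypothetical `articles` table, SQLite used purely to keep the example self-contained - the idea applies to any backend), a cron job run a few times a day might look like:

```python
import html
import pathlib
import sqlite3

def bake_pages(conn, out_dir):
    """Render each article row to a static HTML file.  The web server
    then serves these files directly, never touching the database."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for slug, title, body in conn.execute(
            "SELECT slug, title, body FROM articles"):
        page = (f"<html><head><title>{html.escape(title)}</title></head>"
                f"<body><h1>{html.escape(title)}</h1>"
                f"<p>{html.escape(body)}</p></body></html>")
        (out / f"{slug}.html").write_text(page)

# Demo with an in-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (slug TEXT, title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES "
             "('launch', 'Launch day', 'We are live.')")
bake_pages(conn, "baked")
```

The point is that the database is hit once per bake, not once per visitor; under launch-day load the request path is just the filesystem and the uplink.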

Only *then* does bufferbloat start to be relevant.

Even so, I am also constantly amazed by the lack of performance of most databases under a heavy read-mostly load.  A lot of it could doubtless be solved by tuning, but very few organisations seem to have the competence to do this.  A lot of it probably has to do with using general-purpose databases for unsuitable purposes - sure, you can store 19KB of text in a varchar field, and a quarter-megabyte image too, but that doesn't mean you should do so in a high-performance application.

There are, of course, many databases that are highly optimised, out of the box, for storing large text and image data on a read-mostly basis, even employing extensive caching to avoid unnecessary disk access.  They are commonly known as "filesystems".  :-)
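The alternative is the usual one: keep the bulky bytes on the filesystem and store only a reference in the database.  A minimal sketch (hypothetical `blobs` table and `media/` layout, content-addressed by hash - all names are my invention):

```python
import hashlib
import pathlib
import sqlite3

def store_blob(conn, data, media_root="media"):
    """Write the blob to the filesystem and record only its path in the
    database, instead of stuffing quarter-megabyte values into a column.
    Content-addressing by SHA-256 also deduplicates identical uploads."""
    digest = hashlib.sha256(data).hexdigest()
    path = pathlib.Path(media_root) / digest[:2] / digest
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    conn.execute("INSERT INTO blobs (sha256, path) VALUES (?, ?)",
                 (digest, str(path)))
    return str(path)

# Demo: the database row stays tiny; the filesystem holds the bytes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blobs (sha256 TEXT, path TEXT)")
stored_path = store_blob(conn, b"pretend this is a 250KB image")
```

Reads then go through the OS page cache - exactly the "extensive caching" the joke is about - and the database only ever handles short paths.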

 - Jonathan Morton



