Obie Fernandez on 27 Jul 2006 17:55:15 -0000


Re: [PhillyOnRails] Rails Diatribe

I just spoke with Prag Dave in the hallway here at OSCON and he told
me he found Greg Luck's post to be "libelous"... :-/

On 7/27/06, Brian McCallister <> wrote:
On Jul 26, 2006, at 9:51 PM, Aaron Blohowiak wrote:

> Can you tell us more about the http keepalive bit?

This comment in mongrel.rb sums it up nicely:

   # A design decision was made to force the client to not pipeline
   # requests.  HTTP/1.1 pipelining really kills the performance due to
   # how it has to be handled and how unclear the standard is.  To fix
   # this the HttpResponse gives a "Connection: close" header which
   # forces the client to close right away.  The bonus for this is that
   # it gives a pretty nice speed boost to most clients since they can
   # close their connection immediately.

Interestingly, the whole HTTP status line and the first couple of
headers are a constant, frozen string -- short of patching mongrel or
doing your own TCP connection handling in your Handler, it *will*
close the connection a la HTTP 1.0.
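To make the behavior concrete, here's a minimal sketch (not Mongrel's
actual code) of a server that, like Mongrel, answers every request with
"Connection: close" and then shuts the socket, so the client cannot
reuse the connection for a second request:

```ruby
require "socket"

# Toy server: one request, "Connection: close", then hang up.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

thread = Thread.new do
  client = server.accept
  client.gets("\r\n\r\n")                 # read the request headers
  body = "hello"
  client.write("HTTP/1.1 200 OK\r\n" \
               "Connection: close\r\n" \
               "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close                            # HTTP/1.0-style close
end

sock = TCPSocket.new("127.0.0.1", port)
sock.write("GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = sock.read                      # reads until the server closes
thread.join
puts response.include?("Connection: close")  # => true
puts sock.eof?                               # socket is gone => true
```

Any follow-up request has to pay for a fresh TCP handshake, which is
exactly the trade-off being discussed here.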

I think Zed is an amazing (really, really amazing) programmer. I
presume something in the design of Mongrel makes it tough to
implement keep-alives. Saying that keep-alives are poorly specced, or
that they hurt performance for the protocol or the client, is, um,
wrong, however. I have a hunch that the real reason comes back to the
choice to build an HTTP grammar with the Ragel FSM compiler: keeping
the FSM clean across multiple requests on a single connection has
probably proven difficult.

If you think of mongrel as being designed to run fairly big sites
with one dynamic element and mostly static elements, then this
decision works. Basically you have mongrel serve the dynamic page
(possibly from rails) and go ahead and close the connection, because
you *know* the same server isn't going to receive a follow-up
resource request immediately -- those are handled by servers
optimized for that, or by a content distribution network. In this
case the "Connection: close" on the initial request makes sense: the
browser is going to be opening additional connections to a different
host (or hosts, for a CDN or a round-robined static setup), and those
connections will pipeline requests for resources.

Yahoo! is a good example of this: we see the initial response headers
for the front page come back with the "Connection: close" header:


HTTP/1.x 200 OK
Date: Thu, 27 Jul 2006 16:53:56 GMT
P3P: policyref="";, ...
Vary: User-Agent
Cache-Control: private
Set-Cookie: FPB=3r0o6jmqh12chrt4; expires=Thu, 01 ....
Set-Cookie: D=_ylh=X3oDMTFmdWZsNGY1BF9TAzI ...
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
Content-Encoding: gzip

but subsequent image loads are made against their CDN hosts, and
those connections do pipeline:


HTTP/1.x 200 OK
Last-Modified: Thu, 11 May 2006 20:46:13 GMT
Accept-Ranges: bytes
Content-Length: 7857
Content-Type: image/png
Cache-Control: max-age=2345779
Date: Thu, 27 Jul 2006 16:53:56 GMT
Connection: keep-alive
Expires: Thu, 12 May 2016 20:29:52 GMT


HTTP/1.x 200 OK
Last-Modified: Wed, 27 Jul 2005 00:18:07 GMT
Etag: "29990d3-122-42e6d2bf"
Accept-Ranges: bytes
Content-Length: 290
Content-Type: image/gif
Cache-Control: max-age=979924
Date: Thu, 27 Jul 2006 16:53:56 GMT
Connection: keep-alive
Expires: Sat, 01 Aug 2015 00:46:23 GMT

And so on...
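The distinction in those captures comes down to one header. A quick
sketch of classifying raw response headers (like the ones above) by
whether the connection can be reused -- `connection_reusable?` is a
hypothetical helper, not part of any library:

```ruby
# Decide from raw headers whether the connection may be reused.
def connection_reusable?(raw_headers)
  value = raw_headers[/^Connection:\s*(.+)$/i, 1]
  # HTTP/1.1 defaults to persistent connections when no header is sent.
  value.nil? || value.strip.downcase != "close"
end

front_page = "HTTP/1.x 200 OK\nConnection: close\nContent-Type: text/html"
cdn_image  = "HTTP/1.x 200 OK\nConnection: keep-alive\nContent-Type: image/png"

puts connection_reusable?(front_page)  # => false
puts connection_reusable?(cdn_image)   # => true
```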

This works great for people who need it and have the expertise to use
it, but it doesn't scale down as well for folks who don't have, or
need, an array of specialized web servers and configs -- which would
be most folks. You can use a vanilla mod_proxy configuration and at
least put apache in front to help, but then you still need pretty
good apache config know-how to keep the initial request and the
resource requests down to only one or two handshakes, and to avoid a
big performance hit over the internet -- one that won't be apparent
when doing local, or even same-LAN, development.
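For reference, a vanilla mod_proxy setup along those lines might look
roughly like this -- an illustrative sketch only, with hypothetical
paths and port, not a tuned config:

```apache
# Keep-alive stays on between the browser and Apache.
KeepAlive On

# Let Apache serve static files itself (exclusions before the catch-all).
ProxyPass /images !
Alias /images /var/www/app/public/images

# Everything else goes to the local mongrel instance.
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
```

Even then, cache headers for the static paths are on you, which is the
config know-how being talked about here.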

Mongrel rocks as a highly specialized tool, but the choice to
flat-out disable pipelining requires understanding HTTP reasonably
well in order to not shoot yourself in the foot when the app moves
from a local network to production.

Anyway, sorry I got up on the soapbox; hopefully y'all don't mind
too much :-)


PS: If I recall, Zed wrote the SCGI rails runner, not mod_scgi, which, I think, Neil Schemenauer wrote for Python of all things.

> Aaron Blohowiak
>> mod_ruby exists, but it isn't a good option for any high-load app
>> as it requires prefork; the Ruby VM is not thread-safe. This
>> means a Ruby VM per process, which starts to eat memory *very*
>> quickly.
>> FastCGI is a kinda weird, but very reasonable, protocol for this
>> kind of front-end <--> app server communication. FastCGI has a bit
>> of a bad rap in the RoR world because folks have tried to use the
>> very unmaintained mod_fcgi with Apache 2.0, which it compiles
>> against but doesn't actually work very well with. The much better
>> mod_fcgid does a great job, and the upcoming mod_proxy_fcgi should
>> be great.
>> You could do a shared-memory or domain socket multiplexing a la
>> mod_perl, but no one has really wanted to do a new mod_ruby which
>> supports that. Wouldn't be awful to do, but mod_fcgid, mod_scgi,
>> mod_proxy_http to mongrel or lighttpd (which then uses fcgi) all
>> work well -- as does Apache 1.3 and mod_fcgi (even if it lacks the
>> features, scaling ability, and niceness of Apache 2.0/2.2). If I
>> had the time I'd probably be all over doing a worker-mpm compatible
>> mod_ruby, but alas -- SCGI (less efficient, but simpler than FCGI)
>> serves my needs fine, so other itches get scratched.
>> Anyway, my 2p on it. If mongrel would support http keepalives...
>> oh well. If you are using mod_proxy to a local http instance it
>> should be fine :-)
>> -Brian
>> _______________________________________________
>> talk mailing list

talk mailing list