Message-ID: <20070301121020.GC20773@2ka.mipt.ru>
Date: Thu, 1 Mar 2007 15:10:22 +0300
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: Ingo Molnar <mingo@...e.hu>
Cc: Pavel Machek <pavel@....cz>, Theodore Tso <tytso@....edu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ulrich Drepper <drepper@...hat.com>,
linux-kernel@...r.kernel.org,
Arjan van de Ven <arjan@...radead.org>,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@....com.au>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Zach Brown <zach.brown@...cle.com>,
"David S. Miller" <davem@...emloft.net>,
Suparna Bhattacharya <suparna@...ibm.com>,
Davide Libenzi <davidel@...ilserver.org>,
Jens Axboe <jens.axboe@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3
On Thu, Mar 01, 2007 at 12:47:35PM +0100, Ingo Molnar (mingo@...e.hu) wrote:
>
> * Ingo Molnar <mingo@...e.hu> wrote:
>
> > > I also changed the client socket to nonblocking mode, with the same
> > > result, in the epoll server. If you find it broken, please send me a
> > > corrected version to test too.
> >
> > this line in evserver_kevent.c looks a bit fishy:
>
> this one in evserver_kevent.c looks problematic too:
>
> shutdown(s, SHUT_RDWR);
> close(s);
>
> as evserver_epoll.c only does:
>
> close(s);
>
> again, there might be TCP control flow differences due to this. [ Or the
> removal of this shutdown() call might be a small speedup for the kevent
> case ;) ]
:)
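
For the record, the difference in question looks roughly like this (a
sketch, not the exact server code - the helper names are mine):

    #include <sys/socket.h>
    #include <unistd.h>

    /* evserver_kevent.c did a full shutdown before closing: */
    static void teardown_kevent(int s)
    {
            shutdown(s, SHUT_RDWR); /* SHUT_WR sends a FIN; SHUT_RD only
                                     * discards further input locally */
            close(s);
    }

    /* evserver_epoll.c only does: */
    static void teardown_epoll(int s)
    {
            close(s);               /* with unread data still queued this
                                     * may send a RST instead of a clean FIN */
    }
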
> Also, the order of fd and socket close() is different in the two cases.
> It shouldn't make any difference - but that too just makes the results
> harder to trust. Would it be so hard to introduce a single
> handle_web_request() function that is exactly the same in the two tests?
> All the queueing details (which are of course different in the epoll and
> the kevent case) should be in the client function, which calls
> handle_web_request().
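
Agreed, that would make the comparison cleaner. The shared part would
look roughly like this (only a sketch of the shape - partial reads and
error handling are glossed over, and the canned reply is made up):

    #include <sys/socket.h>
    #include <unistd.h>

    static const char reply[] =
            "HTTP/1.1 200 OK\r\n"
            "Content-Length: 2\r\n"
            "Connection: close\r\n"
            "\r\n"
            "OK";

    /* Read the request, send a canned reply, tear the socket down.
     * Both the epoll and the kevent loop would call only this. */
    static int handle_web_request(int s)
    {
            char buf[4096];
            int err = 0;

            /* one recv() is enough for a small benchmark request;
             * a real server would loop until the request is complete */
            if (recv(s, buf, sizeof(buf), 0) < 0)
                    err = -1;
            else if (send(s, reply, sizeof(reply) - 1, 0) < 0)
                    err = -1;

            close(s);
            return err;
    }
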
I've removed the shutdown() call - the results are the same.
Sometimes kevent performance drops to lower numbers, and the graph of
times needed to handle events shows high plateaus (this happened both
with and without shutdown()), like this:
Percentage of the requests served within a certain time (ms)
  50%    128
  66%    486
  75%    505
  80%    507
  90%    732
  95%   3087   // something is wrong at this point
  98%   9058
  99%   9072
 100%  15562 (longest request)
It is possible that there are some other bugs in the server though,
which prevent sockets from being closed quickly and thus increase the
processing time - I do not know the root cause of that for sure.
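
If slow teardown really is the culprit, one quick way to rule it out
would be to force an immediate reset on close (a debugging sketch only
- SO_LINGER with a zero timeout makes close() drop the connection with
a RST instead of the normal FIN handshake):

    #include <sys/socket.h>
    #include <unistd.h>

    /* force close() to reset the connection immediately, skipping
     * TIME_WAIT - only useful to check whether lingering sockets are
     * what causes the plateaus */
    static void close_now(int s)
    {
            struct linger l = { .l_onoff = 1, .l_linger = 0 };

            setsockopt(s, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
            close(s);
    }
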
I separated the epoll and kevent servers, since originally the kevent
server used additional kevent features; as new ones were added I
gradually moved it toward the same structure as the epoll case.
The current version of the server was a pre-test one for the lighttpd
patches, so essentially it should behave like the epoll server except
for minor details.
> Ingo
--
Evgeniy Polyakov