Message-Id: <200702271015.33123.dada1@cosmosbay.com>
Date: Tue, 27 Feb 2007 10:15:33 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Davide Libenzi <davidel@...ilserver.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [patch] epoll reduced (to 1) number of passes over the ready set ...
On Tuesday 27 February 2007 03:32, Davide Libenzi wrote:
> Epoll is doing multiple passes over the ready set at the moment, because
> of the constraints on the f_op->poll() call. Looking at the code again,
> I noticed that we already hold the epoll semaphore in read, and this
> (together with other locking conditions that hold while doing an
> epoll_wait()) can lead to a smarter way to "ship" events to userspace (in
> a single pass). I added (even) more comments to the code to explain
> the conditions under which certain operations are safe.
> This is a stress application that can be used to test the new code. It
> spawns multiple threads and calls epoll_wait() and epoll_ctl() from many
> threads. Stress tested on my dual Opteron 254 w/out any problems.
Davide,
This is really cool, because struct epitem would now fit in 128 bytes
instead of 192 (on x86_64 platforms), so we also reduce memory usage.
I have one comment:
> */
> - list_for_each(lnk, txlist) {
> - epi = list_entry(lnk, struct epitem, txlink);
> + for (eventcnt = 0; !list_empty(txlist) && eventcnt < maxevents;) {
> + epi = list_entry(txlist->next, struct epitem, rdllink);
Now that we scan the rdllist once, it may be useful to add a prefetch()
hint.
list_for_each() automatically includes a prefetch(pos->next), but your
open-coded loop does not.
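For reference, list_for_each() in include/linux/list.h reads (roughly)
like this, with the prefetch folded into the loop test:

	/* from include/linux/list.h: each pass prefetches the next node */
	#define list_for_each(pos, head) \
		for (pos = (head)->next; prefetch(pos->next), pos != (head); \
		     pos = pos->next)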
I suggest adding, right after the
	epi = list_entry(txlist->next, struct epitem, rdllink);
line:
	prefetch(epi->rdllink.next);
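That is, the top of your loop would become (just a sketch, the rest of
the loop body unchanged from your patch):

	for (eventcnt = 0; !list_empty(txlist) && eventcnt < maxevents;) {
		epi = list_entry(txlist->next, struct epitem, rdllink);
		prefetch(epi->rdllink.next);	/* warm the next node's cache line */

		/* ... ship the event to userspace as in your patch ... */
	}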
Thank you