Message-Id: <200702281213.59695.dada1@cosmosbay.com>
Date: Wed, 28 Feb 2007 12:13:59 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Davide Libenzi <davidel@...ilserver.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [patch] epoll reduced (to 1) number of passes over the ready set ...
On Tuesday 27 February 2007 17:03, Davide Libenzi wrote:
> On Tue, 27 Feb 2007, Eric Dumazet wrote:
> > On Tuesday 27 February 2007 03:32, Davide Libenzi wrote:
> > > Epoll is doing multiple passes over the ready set at the moment,
> > > because of the constraints over the f_op->poll() call. Looking at the
> > > code again, I noticed that we already hold the epoll semaphore in read,
> > > and this (together with other locking conditions that hold while doing
> > > an epoll_wait()) can lead to a smarter way to "ship" events to
> > > userspace (in a single pass). I added (even) more comments to the
> > > code to explain the conditions under which certain operations are safe.
> > > This is a stress application that can be used to test the new code. It
> > > spawns multiple threads and calls epoll_wait() and epoll_ctl() from many
> > > threads. Stress tested on my dual Opteron 254 w/out any problems.
> >
> > Davide,
> >
> > This is really cool, because epitem would now fit in 128 bytes instead
> > of 192 (on x86_64 platforms). So we also reduce memory usage.
>
> Yeah, I forgot to mention that I removed the txlink member.
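For reference, here is a minimal sketch of the kind of multi-threaded stress
described in the quoted text above (this is not Davide's actual test program;
the thread count, iteration count and the single pipe fd are arbitrary choices
for illustration):

/* Several threads hammer epoll_wait() and epoll_ctl() on one epoll fd. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

#define NTHREADS 8
#define NITERS   100000

static int epfd;
static int pipefd[2];

static void *waiter(void *arg)
{
	struct epoll_event ev;
	int i;

	for (i = 0; i < NITERS; i++) {
		/* Short timeout so the thread keeps exercising the
		 * ready-set path even when nothing is pending. */
		epoll_wait(epfd, &ev, 1, 1);
	}
	return NULL;
}

static void *modifier(void *arg)
{
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = pipefd[0] };
	int i;

	for (i = 0; i < NITERS; i++) {
		/* Add and remove the same fd, racing with epoll_wait(). */
		epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);
		epoll_ctl(epfd, EPOLL_CTL_DEL, pipefd[0], NULL);
	}
	return NULL;
}

int main(void)
{
	pthread_t th[NTHREADS];
	int i;

	if (pipe(pipefd) < 0 || (epfd = epoll_create(16)) < 0) {
		perror("setup");
		return 1;
	}
	write(pipefd[1], "x", 1);	/* keep the read end readable */

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&th[i], NULL, i & 1 ? waiter : modifier, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(th[i], NULL);

	printf("done\n");
	return 0;
}

Build with "gcc -O2 -pthread"; the alternating waiter/modifier threads keep
epoll_wait() racing against concurrent EPOLL_CTL_ADD/DEL on the same fd.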
I am pretty sure you can also remove the revents member from epitem.
It would greatly benefit 32-bit platforms, because the resulting size would be
64 bytes instead of 68: with 64-byte alignment, that is 64 bytes per object
instead of 128, i.e. a 50% reduction.
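To make the arithmetic explicit, a quick illustrative calculation (assuming
64-byte slab/cache-line alignment; the 68 and 64 byte figures are the ones
quoted above, not recomputed from fs/eventpoll.c):

#include <stdio.h>

/* Round an object size up to the slab alignment it will be padded to. */
static size_t slab_footprint(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	printf("68 bytes -> %zu per object\n", slab_footprint(68, 64)); /* 128 */
	printf("64 bytes -> %zu per object\n", slab_footprint(64, 64)); /*  64 */
	return 0;
}

So dropping revents halves the per-object footprint on a 32-bit build, which
is where the 50% figure comes from.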
Eric