Message-ID: <45662522.9090101@garzik.org>
Date: Thu, 23 Nov 2006 17:48:02 -0500
From: Jeff Garzik <jeff@...zik.org>
To: Ulrich Drepper <drepper@...hat.com>
CC: Evgeniy Polyakov <johnpol@....mipt.ru>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...l.org>, netdev <netdev@...r.kernel.org>,
Zach Brown <zach.brown@...cle.com>,
Christoph Hellwig <hch@...radead.org>,
Chase Venters <chase.venters@...entec.com>,
Johann Borck <johann.borck@...sedata.com>,
linux-kernel@...r.kernel.org
Subject: Re: [take25 1/6] kevent: Description.
Ulrich Drepper wrote:
> Evgeniy Polyakov wrote:
>> +    int kevent_commit(int ctl_fd, unsigned int start,
>> +                      unsigned int num, unsigned int over);
>
> I think we can simplify this interface:
>
> int kevent_commit(int ctl_fd, unsigned int new_tail,
> unsigned int over);
>
> The kernel sets the ring_uidx value to the 'new_tail' value if the tail
> pointer would be incremented (modulo wrap-around) and is not higher than
> the current front pointer. The test will be a bit complicated, but not
> more so than what the current code has to do to check for mistakes.
>
> This approach has the advantage that the commit calls don't have to be
> synchronized. If one thread sets the tail pointer to, say, 10 and
> another to 12, then it does not matter whether the first thread is
> delayed. If it is eventually executed, the result is simply a no-op,
> since the second thread's action supersedes it.
>
> Maybe the current form is even impossible to use without explicit
> locking at userlevel. What if one thread, which is about to call
> kevent_commit, is indefinitely delayed? Then this commit request's
> value is never taken into account and the tail pointer is always short
> of what it should be.
I'm really wondering whether designing for N-threads-to-1-ring is the
wisest choice.
Considering current designs, it seems more likely that a single thread
polls for socket activity and then dispatches work. How often do you
really see, in userland, multiple threads polling the same set of fds
and then fighting to decide who will handle the raised events?
More likely, you will see "prefork" (start N threads, each with its own
ring), a worker pool (a single thread receives events, then dispatches
them to multiple threads for execution), or even one-thread-per-fd (a
single thread receives events, then starts a new thread for handling).
If you have multiple threads accessing the same ring -- a poor design
choice -- I would think the burden should be on the application to
provide proper synchronization.
If the desire is to have the kernel distribute events directly to
multiple threads, then the app should dup(2) the fd to be watched and
create a ring buffer for each separate thread.
Jeff
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/