Message-ID: <4638C4EF.1030302@haxent.com.br>
Date: Wed, 02 May 2007 14:05:51 -0300
From: Davi Arnaut <davi@...ent.com.br>
To: Ulrich Drepper <drepper@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Davide Libenzi <davidel@...ilserver.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [patch 14/22] pollfs: pollable futex
Ulrich Drepper wrote:
> On 5/2/07, Davi Arnaut <davi@...ent.com.br> wrote:
>> It's quite easy to implement this scheme by write()ing the futexes all
>> at once but that would break the one futex per fd association. For
>> atomicity: if one of the futexes can't be queued, we would rollback
>> (unqueue) the others.
>>
>> Sounds sane?
>
> I don't know how you use "unqueue" in this context. If a queued futex
> is one which is /locked/ by the call, then yes, this is the semantics
> needed. Atomically locking a number of futexes means that if one of
> the set cannot be locked all operations done to lock the others have
> to be undone. It's an all-or-nothing situation.
The waits are queued, thus they can be "unqueued". It's quite simple to
extend futex_wait_queue() to support this, but again, you are thinking of
locks while what I want is fast events.
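For illustration, the all-or-nothing rollback described above could look
roughly like the sketch below. This is not the actual patch code:
queue_one() and unqueue_one() are hypothetical stand-ins for the
kernel-side queue/unqueue operations, modeled here in userspace.

```c
/* Sketch: queue all futexes or none, rolling back on failure.
 * queue_one()/unqueue_one() are hypothetical stand-ins for the
 * real kernel-side operations. */
#include <stddef.h>

#define NFUTEX 4

static int queued[NFUTEX];      /* 1 if slot i is currently queued */
static int fail_at = -1;        /* simulate a failure at this slot */

static int queue_one(int i)
{
	if (i == fail_at)
		return -1;      /* e.g. -EAGAIN in the kernel */
	queued[i] = 1;
	return 0;
}

static void unqueue_one(int i)
{
	queued[i] = 0;
}

/* Atomicity: if any futex cannot be queued, unqueue the ones
 * already queued, in reverse order, and report failure. */
static int queue_all(int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (queue_one(i) < 0) {
			while (--i >= 0)
				unqueue_one(i);
			return -1;
		}
	}
	return 0;
}
```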
> Locking is not as easy as you might think, though. For non-PI futexes
> there is deliberately no protocol in place describing what "locked"
> means. The locking operation has to be customizable. This is what
> the FUTEX_OP_* stuff is about.
Events are simple. An event is either signaled or not: a futex value of 0
means not signaled, 1 or greater means signaled.
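A minimal userspace sketch of that value convention (my own illustration,
not code from the patch; a real implementation would pair these helpers
with FUTEX_WAIT/FUTEX_WAKE on the same word):

```c
/* Sketch of the event convention: futex word 0 = not signaled,
 * >= 1 = signaled. Plain counter here for illustration only. */
static unsigned int futex_word;         /* the futex value */

static void event_signal(void)
{
	futex_word++;                   /* now >= 1: signaled */
	/* real code would add: futex(&futex_word, FUTEX_WAKE, 1, ...) */
}

static int event_is_signaled(void)
{
	return futex_word != 0;
}

static void event_consume(void)
{
	if (futex_word)
		futex_word--;           /* back to 0 once drained */
	/* real code would FUTEX_WAIT on futex_word == 0 instead */
}
```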
> And you wrote that currently each futex needs its own file descriptor.
> So this would have to be changed, too.
If it's really worth it, I have no problem with it.
--
Davi Arnaut