Message-ID: <4AEB12F0.9090006@us.ibm.com>
Date: Fri, 30 Oct 2009 09:23:12 -0700
From: Darren Hart <dvhltc@...ibm.com>
To: Arnd Bergmann <arnd@...db.de>
CC: "lkml, " <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Eric Dumazet <eric.dumazet@...il.com>,
Dinakar Guniguntala <dino@...ibm.com>,
"Stultz, John" <johnstul@...ibm.com>
Subject: Re: [PATCH] RFC: futex: make futex_lock_pi interruptible
Arnd Bergmann wrote:
> On Friday 30 October 2009, Darren Hart wrote:
>> Darren Hart wrote:
>> This appears to work fine. Can anyone think of a reason why this is an unsafe
>> thing to do? I'll have to create a much more elaborate test case and review
>> the glibc code of course to make sure the glibc mutex state isn't compromised.
>
> The only reason I can see against it is the need to use one of the
> rt signal numbers from library code, which may conflict with other
> users of the signal. Being able to avoid a signal altogether would
> be really nice, as in the futex_cancel extension you mentioned.
For exactly the reason you mention, consuming one of the rt signal
numbers, the futex_cancel extension was how I originally set out to
tackle this. However, Thomas and Peter both seemed to feel that the
signal approach was a more standard way of interrupting a Unix system
call. One tricky part of the futex_cancel approach will be identifying
which thread to cancel, since no other futex operation is thread
specific. I suspect that overloading one of the arguments to pass a TID
would address that nicely. This would allow us to return ECANCELED from
the kernel, which I think is a much more direct implementation.
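
Roughly what I have in mind, strictly a sketch (the FUTEX_CANCEL op,
its number, and the argument layout below are all made up for
illustration, none of this exists in the kernel):

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define FUTEX_CANCEL 13         /* hypothetical op number */

/*
 * Ask the kernel to wake the waiter with the given TID blocked in
 * futex_lock_pi() on uaddr, failing its FUTEX_LOCK_PI call with
 * ECANCELED. The TID is passed by overloading the val argument.
 */
static int futex_cancel(int *uaddr, pid_t tid)
{
        return syscall(SYS_futex, uaddr, FUTEX_CANCEL, tid, NULL, NULL, 0);
}

The waiting side would then just check for ECANCELED from its
FUTEX_LOCK_PI call and unwind normally, no signal handler or jmp_buf
required.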
Peter and Thomas, could you comment on why the signal approach might be
preferred over the futex_cancel extension?
>> /* Need some kind of per-thread variable here */
>> jmp_buf env;
>> pthread_mutex_t mutex;
>
> Maybe instead of per-thread variables (which should work
> fine), you could do
>
> typedef struct {
>         jmp_buf env;
>         pthread_mutex_t mutex;
> } interruptible_mutex_t;
I don't quite follow. There will be a 1:many relationship between
mutex:threads, but there should be a 1:1 relationship between
threads:env. Since multiple threads can block on one mutex, the above
struct wouldn't provide enough env instances to set a jump point for
each waiter... am I misunderstanding your suggestion?
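
To illustrate, this is the kind of layout I had in mind, just a sketch
(in a real test case env would more likely sit behind a pthread_key_t,
and sigsetjmp/siglongjmp would be more robust from a signal handler):

#include <pthread.h>
#include <setjmp.h>

/* One mutex, shared by every thread that contends on it... */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * ...but each blocked thread needs its own jump target, so env has to
 * be per thread (thread-local storage here), not a member of a struct
 * wrapping the mutex.
 */
static __thread jmp_buf env;

static void cancel_handler(int signo)
{
        (void)signo;
        /* Each thread unwinds to its own env. */
        longjmp(env, 1);
}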
>> /* ensure the child has blocked on the lock */
>> sleep(1);
>
> In a real application, you might want to add some logic to avoid
> this kind of race. For the test case, you probably need to do it
> with the sleep.
This would likely need to be handled within glibc, just as it manages
the sequence counters for the condvars to deal with wake-up races.
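
For the test case itself, a barrier at least narrows the window before
falling back to a short delay. A sketch (it does not fully close the
race; the parent holds the mutex across this whole sequence and sends
the signal only after the barrier plus a brief delay):

#include <pthread.h>

static pthread_barrier_t barrier;  /* initialized for 2 threads in main() */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *child(void *arg)
{
        (void)arg;
        /* Tell the parent we are about to block on the mutex... */
        pthread_barrier_wait(&barrier);
        /*
         * ...but there is still a window before we are actually queued
         * on the futex, so the parent still needs a short delay (or to
         * poll for the waiter) before sending the signal.
         */
        pthread_mutex_lock(&mutex);
        pthread_mutex_unlock(&mutex);
        return NULL;
}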
--
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team