Message-ID: <461647F5.5040206@yahoo.com.au>
Date: Fri, 06 Apr 2007 23:15:33 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Hugh Dickins <hugh@...itas.com>
CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Eric Dumazet <dada1@...mosbay.com>,
Ulrich Drepper <drepper@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Jones <davej@...hat.com>, Ingo Molnar <mingo@...e.hu>,
Andi Kleen <ak@...e.de>,
Ravikiran G Thirumalai <kiran@...lex86.org>,
"Shai Fultheim (Shai@...lex86.org)" <shai@...lex86.org>,
pravin b shelar <pravin.shelar@...softinc.com>,
linux-kernel@...r.kernel.org,
"Pierre.Peiffer" <Pierre.Peiffer@...l.net>
Subject: Re: Shared futexes (was [PATCH] FUTEX : new PRIVATE futexes)

Hugh Dickins wrote:
> On Fri, 6 Apr 2007, Peter Zijlstra wrote:
>
>>some thoughts on shared futexes;
>>
>>Could we get rid of the mmap_sem on the shared futexes in the following
>>manner:

I'd imagine shared futexes would be much less common than private ones
for threaded programs... I'd say we should reevaluate things once we
have private futexes and malloc/free stops hammering mmap_sem so hard...
>> - get a page using pfn_to_page (skipping VM_PFNMAP)
>> - get the futex key from page->mapping->host and page->index
>> and offset from addr % PAGE_SIZE.
>>
>>or given a key:
>>
>> - lookup the page from key.shared.inode->i_mapping by key.shared.pgoff
>> possibly loading the page using mapping->a_ops->readpage().
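
A rough kernel-flavoured sketch of the key derivation described above,
assuming the backing page has already been looked up and pinned; the
helper name is hypothetical, and the field names follow the futex_key
union of that era:

#include <linux/futex.h>
#include <linux/mm.h>

/* Hypothetical helper: build a shared futex key from the backing page
 * itself rather than from a vma walk under mmap_sem.  Assumes "page"
 * is pinned, backs a regular file mapping (no VM_PFNMAP, not
 * anonymous), and that uaddr is the user address of the futex word. */
static void page_to_futex_key(struct page *page, unsigned long uaddr,
			      union futex_key *key)
{
	key->shared.inode = page->mapping->host;	/* backing inode */
	key->shared.pgoff = page->index;		/* page offset in the file */
	key->both.offset  = uaddr & ~PAGE_MASK;		/* futex offset within the page */
}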

For shared futexes, wouldn't i_mapping be worse, because you'd be
ping-ponging the tree_lock between processes, rather than having each
process use its own mmap_sem?

That also only helps the wakeup case, doesn't it? You still have to
use the vmas to find out which inode to use for the wait, I think?
(unless you introduce a new shared futex API).
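
For contrast, here is a condensed sketch of the vma-based lookup the
wait side relies on today; it is an approximation of the existing
shared-key path rather than a verbatim copy, with error handling and
the non-linear mapping case left out:

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/futex.h>
#include <linux/mm.h>
#include <linux/sched.h>

/* Condensed sketch: the waiter walks its own vma under mmap_sem to
 * find the backing inode for an inode-based key. */
static int sketch_get_shared_key(unsigned long address, union futex_key *key)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;

	down_read(&mm->mmap_sem);		/* the lock under discussion */
	vma = find_vma(mm, address);
	if (!vma || address < vma->vm_start ||
	    !(vma->vm_flags & VM_MAYSHARE) || !vma->vm_file) {
		up_read(&mm->mmap_sem);
		return -EINVAL;
	}
	key->both.offset  = address & ~PAGE_MASK;
	key->shared.inode = vma->vm_file->f_dentry->d_inode;
	key->shared.pgoff = ((address - vma->vm_start) >> PAGE_SHIFT)
				+ vma->vm_pgoff;
	up_read(&mm->mmap_sem);
	return 0;
}
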
--
SUSE Labs, Novell Inc.