Message-ID: <87vdho7kzn.fsf@basil.nowhere.org>
Date: Thu, 05 Nov 2009 21:52:12 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: npiggin@...e.de, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...e.hu>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"hugh.dickins@...cali.co.uk" <hugh.dickins@...cali.co.uk>
Subject: Re: [RFC MM] Accessors for mm locking
Christoph Lameter <cl@...ux-foundation.org> writes:
> From: Christoph Lameter <cl@...ux-foundation.org>
> Subject: [RFC MM] Accessors for mm locking
>
> Scaling of MM locking has been a concern for a long time. With the arrival of
> high thread counts in average business systems we may finally have to do
> something about that.
Thanks for starting to think about that. Yes, this is definitely
something that needs to be addressed.
> Index: linux-2.6/arch/x86/mm/fault.c
> ===================================================================
> --- linux-2.6.orig/arch/x86/mm/fault.c 2009-11-05 13:02:35.000000000 -0600
> +++ linux-2.6/arch/x86/mm/fault.c 2009-11-05 13:02:41.000000000 -0600
> @@ -758,7 +758,7 @@ __bad_area(struct pt_regs *regs, unsigne
> * Something tried to access memory that isn't in our memory map..
> * Fix it, but check if it's kernel or user first..
> */
> - up_read(&mm->mmap_sem);
> + mm_reader_unlock(mm);
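(For reference, I'm assuming the accessors are just thin wrappers that hide
the mmap_sem name from callers, roughly like the sketch below -- my guess at
the shape, not copied from your patch:)

/* Sketch of the accessor idea: thin static inlines over mmap_sem. */
static inline void mm_reader_lock(struct mm_struct *mm)
{
	down_read(&mm->mmap_sem);
}

static inline void mm_reader_unlock(struct mm_struct *mm)
{
	up_read(&mm->mmap_sem);
}

static inline void mm_writer_lock(struct mm_struct *mm)
{
	down_write(&mm->mmap_sem);
}

static inline void mm_writer_unlock(struct mm_struct *mm)
{
	up_write(&mm->mmap_sem);
}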
My assumption was that a suitable scalable lock (or rather multi locks)
would need to know about the virtual address, or at least the VMA.
As in doing range locking for different address space areas.
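To make that concrete, I'd expect a range-aware variant to need an interface
of roughly this shape (names invented here purely for illustration -- the
point is that the caller has to pass the address or range in, which a plain
mm_reader_lock(mm) signature cannot express):

/*
 * Hypothetical range-locking interface, made up for illustration only.
 * The caller names the range (or VMA) it wants to touch, so independent
 * parts of the address space can be locked independently.
 */
struct mm_range_cookie;

struct mm_range_cookie *mm_range_read_lock(struct mm_struct *mm,
					    unsigned long start,
					    unsigned long end);
void mm_range_read_unlock(struct mm_struct *mm,
			  struct mm_range_cookie *cookie);

The fault path would then look something like

	cookie = mm_range_read_lock(mm, address, address + PAGE_SIZE);
	vma = find_vma(mm, address);
	...
	mm_range_read_unlock(mm, cookie);
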
So this simple abstraction doesn't seem to be enough to really experiment with, does it?
Or what did you have in mind for improving the locking without using
ranges?
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.