Message-ID: <87r5sc7kst.fsf@basil.nowhere.org>
Date: Thu, 05 Nov 2009 21:56:18 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: npiggin@...e.de, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...e.hu>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"hugh.dickins@...cali.co.uk" <hugh.dickins@...cali.co.uk>
Subject: Re: Subject: [RFC MM] mmap_sem scaling: Use mutex and percpu counter instead

Christoph Lameter <cl@...ux-foundation.org> writes:
> Instead of a rw semaphore, use a mutex and a per cpu counter for the number
> of current readers. Read locking then becomes very cheap, requiring only
> the increment of a per cpu counter.
>
> Write locking is more expensive since the writer must scan the percpu array
> and wait until all readers are complete. Since the readers are not holding
> semaphores, we have no wait queue through which the writer could be woken
> up. In this draft we simply wait for one millisecond between scans of the
> percpu array. A different solution must be found there.
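
For reference, the quoted scheme boils down to roughly the following
kernel-style sketch. The names (struct mm_scan_lock, mm_read_lock(), ...)
are made up for illustration and this is not the actual patch; memory
ordering and the race between an incoming reader and a scanning writer are
glossed over. It mainly shows why the writer ends up polling: the readers
never put themselves on any wait queue the writer could sleep on.

#include <linux/percpu.h>
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/cpumask.h>
#include <linux/errno.h>

struct mm_scan_lock {
        struct mutex writer;            /* serializes writers */
        int __percpu *readers;          /* per cpu count of active readers */
};

static int mm_scan_lock_init(struct mm_scan_lock *l)
{
        mutex_init(&l->writer);
        l->readers = alloc_percpu(int);
        return l->readers ? 0 : -ENOMEM;
}

static void mm_read_lock(struct mm_scan_lock *l)
{
        /* Cheap path: bump this CPU's reader count. */
        this_cpu_inc(*l->readers);

        /*
         * Back off if a writer is already in: drop our count, wait for
         * the writer to finish, then retake the read side.
         */
        if (mutex_is_locked(&l->writer)) {
                this_cpu_dec(*l->readers);
                mutex_lock(&l->writer);
                this_cpu_inc(*l->readers);
                mutex_unlock(&l->writer);
        }
}

static void mm_read_unlock(struct mm_scan_lock *l)
{
        /*
         * May hit a different CPU's counter after migration; only the
         * sum over all CPUs is meaningful.
         */
        this_cpu_dec(*l->readers);
}

static int mm_active_readers(struct mm_scan_lock *l)
{
        int cpu, sum = 0;

        for_each_possible_cpu(cpu)
                sum += *per_cpu_ptr(l->readers, cpu);
        return sum;
}

static void mm_write_lock(struct mm_scan_lock *l)
{
        mutex_lock(&l->writer);

        /*
         * No wait queue to hang off, so scan the percpu counters until
         * all readers are gone -- the 1ms polling loop the description
         * above calls a placeholder for a better solution.
         */
        while (mm_active_readers(l))
                msleep(1);
}

static void mm_write_unlock(struct mm_scan_lock *l)
{
        mutex_unlock(&l->writer);
}
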
I'm not sure making all writers more expensive is really a good idea.
For example, it will definitely impact the AIM7 multi brk() issue
or the mysql allocation case, which are both writer-intensive. I assume
doing a lot of mmaps/brks in parallel is not that uncommon.
My thinking was more that we simply need per-VMA locking or
some other locking over larger address ranges. Unfortunately that
needs changes in a lot of users that mess with the VMA lists
(perhaps it really needs some better abstractions for VMA list
management first).
That said, addressing the convoying issues in the current
semaphores would also be a good idea, which is what your patch does.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.