Message-ID: <20130930164141.GH3081@twins.programming.kicks-ass.net>
Date: Mon, 30 Sep 2013 18:41:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...nel.org>, Waiman Long <Waiman.Long@...com>,
Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>,
Peter Hurley <peter@...leysoftware.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Alex Shi <alex.shi@...el.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Matthew R Wilcox <matthew.r.wilcox@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
"Chandramouleeswaran, Aswin" <aswin@...com>,
"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH] rwsem: reduce spinlock contention in wakeup code path
On Mon, Sep 30, 2013 at 09:13:52AM -0700, Linus Torvalds wrote:
> So unlike a lot of other "let's try to make our locking fancy" that I
> dislike because it tends to hide the fundamental problem of
> contention, the rwlock patches make me go "those actually _fix_ a
> fundamental problem".
So here I'm slightly disagreeing; fixing the fundamental problem would be
coming up with better anon_vma management that doesn't create such
immense chains.
It's still the same lock, spinlock or not.
And regardless of whether we keep the anon_vma lock an rwsem or not, I
think we should merge those rwsem patches, as they do improve the lock
implementation and the hard work has already been done.
However, the biggest ugly by far here is that mm_take_all_locks() thing;
couldn't we implement that by basically freezing all tasks referencing
that mm?