Message-ID: <20160917013606.GM5016@twins.programming.kicks-ass.net>
Date: Sat, 17 Sep 2016 03:36:06 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Dave Chinner <david@...morbit.com>,
Ying Huang <ying.huang@...el.com>,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH 1/4] mm, vmscan: Batch removal of mappings under a single
 lock during reclaim

On Fri, Sep 16, 2016 at 11:33:00AM -0700, Linus Torvalds wrote:
> On Fri, Sep 16, 2016 at 6:25 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > So, once upon a time, in a galaxy far away,.. I did a concurrent
> > pagecache patch set that replaced the tree_lock with a per page bit-
> > spinlock and fine grained locking in the radix tree.
>
> I'd love to see the patch for that. I'd be a bit worried about extra
> locking in the trivial cases (ie multi-level locking when we now take
> just the single mapping lock), but if there is some smart reason why
> that doesn't happen, then..
On average we'll likely take a few more locks, but it's not as bad as
having to lock the whole tree depth every time; most of the time we
don't even touch the root lock.
There are two cases. In the first, the modification is confined to a
single node (like an insert): we do an RCU lookup of the node, lock it,
verify the node is still the right one, do the modification, and
unlock. Done.
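
In entirely made-up code it looks something like below; the node
helpers and the lock bit are hypothetical stand-ins, not the actual
2.6.31 code. Only bit_spin_lock()/bit_spin_unlock() and the RCU
primitives are real kernel API:

	/*
	 * Hypothetical sketch of the single-node case.
	 * radix_tree_lookup_node(), node_is_valid_for(), node_slot()
	 * and RADIX_NODE_LOCK_BIT are made up for illustration.
	 */
	static int speculative_insert(struct radix_tree_root *root,
				      unsigned long index, void *item)
	{
		struct radix_tree_node *node;
		void **slot;

		rcu_read_lock();
		node = radix_tree_lookup_node(root, index); /* RCU lookup */
		if (!node) {
			rcu_read_unlock();
			return -EAGAIN;		/* caller retries */
		}

		bit_spin_lock(RADIX_NODE_LOCK_BIT, &node->flags);

		/*
		 * Re-validate under the lock: the node may have been
		 * replaced, or freed and reused, between the RCU lookup
		 * and taking the lock.
		 */
		if (!node_is_valid_for(node, index)) {
			bit_spin_unlock(RADIX_NODE_LOCK_BIT, &node->flags);
			rcu_read_unlock();
			return -EAGAIN;		/* caller retries */
		}

		slot = node_slot(node, index);
		rcu_assign_pointer(*slot, item); /* the actual modification */

		bit_spin_unlock(RADIX_NODE_LOCK_BIT, &node->flags);
		rcu_read_unlock();
		return 0;
	}

The lookup-lock-verify dance is what keeps the common case down to a
single per-node lock instead of the global tree_lock.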
In the second case, the modification must propagate back up the tree
(setting/clearing tags, delete). Here we can determine on the way down
the highest node we need to modify, lock that, verify it, and then lock
all nodes below it down to the last; i.e. we lock a partial path.
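
Roughly, with the same caveats as above (find_highest_affected_node(),
node_still_valid() and node_child() are made up):

	/*
	 * Hypothetical sketch of partial-path locking for an
	 * upward-propagating modification (tag set/clear, delete).
	 */
	static int lock_partial_path(struct radix_tree_root *root,
				     unsigned long index,
				     struct radix_tree_node *path[], int *nr)
	{
		struct radix_tree_node *highest, *node;
		int i = 0;

		rcu_read_lock();

		/*
		 * One RCU walk down to find the highest node whose
		 * state this modification will change, e.g. the node
		 * where a cleared tag stops propagating.
		 */
		highest = find_highest_affected_node(root, index);
		if (!highest) {
			rcu_read_unlock();
			return -EAGAIN;
		}

		bit_spin_lock(RADIX_NODE_LOCK_BIT, &highest->flags);
		if (!node_still_valid(highest, index)) {
			bit_spin_unlock(RADIX_NODE_LOCK_BIT, &highest->flags);
			rcu_read_unlock();
			return -EAGAIN;		/* caller retries the walk */
		}

		/*
		 * With the highest node locked, the subtree below it
		 * cannot be restructured under us, so we can lock each
		 * node on the way down to the leaf: a partial path,
		 * not the full tree depth.
		 */
		for (node = highest; node; node = node_child(node, index)) {
			if (node != highest)
				bit_spin_lock(RADIX_NODE_LOCK_BIT,
					      &node->flags);
			path[i++] = node;
		}
		*nr = i;

		/*
		 * Returns with rcu_read_lock() held; the caller does
		 * the modification and drops it along with the node
		 * locks, in reverse acquisition order.
		 */
		return 0;
	}

Since acquisition is always top-down, there's no lock-ordering problem
between concurrent modifiers on overlapping paths.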
I can send you the 2.6.31 patches if you're interested, but if you want
something that applies to a kernel from this decade I'll have to go
rewrite them, which will take a wee bit of time :-) Both the radix tree
code and the mm have changed somewhat.