Message-ID: <20161222050130.49d93982@roar.ozlabs.ibm.com>
Date: Thu, 22 Dec 2016 05:01:30 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Bob Peterson <rpeterso@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Steven Whitehouse <swhiteho@...hat.com>,
Andrew Lutomirski <luto@...nel.org>,
Andreas Gruenbacher <agruenba@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
linux-mm <linux-mm@...ck.org>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC][PATCH] make global bitlock waitqueues per-node
On Thu, 22 Dec 2016 04:33:31 +1000
Nicholas Piggin <npiggin@...il.com> wrote:
> On Wed, 21 Dec 2016 10:02:27 -0800
> Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
> > I do think your approach of just re-using the existing bit waiting
> > with just a page-specific waiting function is nicer than Nick's "let's
> > just roll new waiting functions" approach. It also avoids the extra
> > initcall.
> >
> > Nick, comments?
>
> Well yes we should take my patch 1 and use the new bit for this
> purpose regardless of what way we go with patch 2. I'll reply to
> that in the other mail.
Actually, when I hit send I thought your next mail was addressing a
different subject. So, back here.
Peter's patch is less code, and in that regard a bit nicer. I tried
going that way once, but I thought it was a bit too sloppy to do
nicely with the wait bit APIs.
- The page can be added to the waitqueue without PageWaiters being set.
  This is a transient condition in which the lock is retested, but it
  means PageWaiters is not quite equivalent to waitqueue_active.
- This set + retest means every time a page gets a waiter, the cost
  is two test-and-sets on the lock bit plus two spin_lock/spin_unlock
  pairs for the waitqueue.
- Setting PageWaiters is done outside the waitqueue lock, so you also
  have new interleavings to think about versus clearing the bit.
- It fails to clear the bit and return to the fastpath when there are
  hash collisions. Yes, I know this is a rare case and on average it
  probably does not matter. But jitter is important, and we really
  *want* to keep the waitqueue table small and lean like you have made
  it, if possible. None of this 100KB per zone crap -- I do want to
  keep it small, and tolerating collisions better would help with
  that.
Anyway, that's about my 2c. Keep in mind that Mel just said he might
have seen a lockup with Peter's patch, and mine has not been hugely
tested either, so let's wait for a bit more testing before merging
either one.
Although we could start pipelining the process by merging patch 1 if
Hugh acks it (cc'ed), then I'll resend with SOB and Ack.
Thanks,
Nick