Message-ID: <20170103222958.4a2ce0e6@roar.ozlabs.ibm.com>
Date: Tue, 3 Jan 2017 22:29:58 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Bob Peterson <rpeterso@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Steven Whitehouse <swhiteho@...hat.com>,
Andrew Lutomirski <luto@...nel.org>,
Andreas Gruenbacher <agruenba@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH 2/2] mm: add PageWaiters indicating tasks are waiting
for a page bit
On Tue, 3 Jan 2017 10:24:39 +0000
Mel Gorman <mgorman@...hsingularity.net> wrote:
> On Thu, Dec 29, 2016 at 03:26:15PM +1000, Nicholas Piggin wrote:
> > > And I fixed that too.
> > >
> > > Of course, I didn't test the changes (apart from building it). But
> > > I've been running the previous version since yesterday, so far no
> > > issues.
> >
> > It looks good to me.
> >
>
> FWIW, I blindly queued a test of Nick's patch, Linus' patch on top and
> PeterZ's patch using 4.9 as a baseline so all could be applied cleanly.
> 3 machines were used, one of them NUMA with 2 sockets. The UMA
> machines showed nothing unusual.
Hey thanks Mel.
>
> kernel building showed nothing unusual on any machine
>
> git checkout in a loop showed:
> o minor gains with Nick's patch
> o no impact from Linus's patch
> o flat performance from PeterZ's
>
> git test suite showed:
> o close to flat performance on all patches
> o Linus' patch on top showed increased variability, but nothing serious
I'd be really surprised if Linus's patch is actually adding variability
unless it is just some random cache or branch predictor or similar change
due to changed code sizes. Testing on a Skylake CPU showed the old sequence
takes a big stall with the load-after-lock;op hazard.
So I wouldn't worry about it too much, but it may be something interesting
to look at for someone who knows x86 microarchitectures well.
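To spell out what I mean by the two sequences, here is a minimal
userspace-style sketch; the masks and helper names are invented for
illustration, this is not the kernel code:

    #define PG_LOCKED_MASK  0x1UL
    #define PG_WAITERS_MASK 0x2UL

    /* Old sequence: a locked RMW drops the lock bit, then a separate
     * plain load of the same word checks for waiters.  The load right
     * after the locked op is the load-after-lock;op pattern that stalls. */
    static int unlock_then_test(unsigned long *flags)
    {
            __atomic_fetch_and(flags, ~PG_LOCKED_MASK, __ATOMIC_RELEASE);
            return (*flags & PG_WAITERS_MASK) != 0;
    }

    /* New sequence: a single locked RMW whose return value already
     * carries the waiters information, so the just-written cacheline
     * is not reloaded. */
    static int unlock_and_test(unsigned long *flags)
    {
            unsigned long old = __atomic_fetch_and(flags, ~PG_LOCKED_MASK,
                                                   __ATOMIC_RELEASE);
            return (old & PG_WAITERS_MASK) != 0;
    }

    int main(void)
    {
            unsigned long flags = PG_LOCKED_MASK | PG_WAITERS_MASK;
            int w1 = unlock_then_test(&flags);

            flags = PG_LOCKED_MASK | PG_WAITERS_MASK;
            return !(w1 && unlock_and_test(&flags));
    }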
>
> will-it-scale pagefault tests
> o page_fault1 and page_fault2 showed no differences in processes
>
> o page_fault3 using processes did show some large losses at some
> process counts on all patches. The losses were not consistent on
> each run. There was also no consistent loss with increasing
> process counts. It did appear that Peter's patch had fewer
> problems, with only one thread count showing issues, so it *may*
> be more resistant to the problem, but not completely, and it's not
> obvious why, so it could be a testing anomaly
Okay. page_fault3 has each process doing repeated page faults on its
own 128MB file in /tmp. Unless they fill memory and start to reclaim
(which I believe must be happening in Dave's case), there should be no
contention on the page lock. After the patch, the uncontended case
should be strictly faster.
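For reference, the core of that workload is roughly the following
(paraphrased from memory with made-up names, not the actual
will-it-scale source; the real benchmark loops forever and counts
iterations):

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define MEMSIZE (128UL * 1024 * 1024)

    int main(void)
    {
            char tmpfile[] = "/tmp/pf3-sketch-XXXXXX";
            int fd = mkstemp(tmpfile);
            long pgsize = sysconf(_SC_PAGESIZE);

            if (fd < 0 || ftruncate(fd, MEMSIZE) < 0)
                    return 1;
            unlink(tmpfile);        /* file lives only as long as the fd */

            for (int rounds = 0; rounds < 10; rounds++) {
                    char *c = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
                    if (c == MAP_FAILED)
                            return 1;
                    /* one write fault per page of the per-process file */
                    for (unsigned long i = 0; i < MEMSIZE; i += pgsize)
                            c[i] = 0;
                    munmap(c, MEMSIZE);
            }
            return 0;
    }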
When there is contention, there is an added cost of setting and clearing
the page waiters bit. Maybe there is some other issue there... are you
seeing the losses in the uncontended case, the contended case, or both?
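The extra work I have in mind is roughly one locked op on each side of
the contended path, something like this invented-names sketch rather
than the actual patch:

    #define PG_WAITERS_MASK 0x2UL

    /* a waiter advertises itself before sleeping: one extra locked op */
    static void note_waiter(unsigned long *flags)
    {
            __atomic_fetch_or(flags, PG_WAITERS_MASK, __ATOMIC_RELAXED);
    }

    /* and the bit is cleared once the wait queue drains: another one */
    static void last_waiter_gone(unsigned long *flags)
    {
            __atomic_fetch_and(flags, ~PG_WAITERS_MASK, __ATOMIC_RELAXED);
    }

    int main(void)
    {
            unsigned long flags = 0;

            note_waiter(&flags);
            last_waiter_gone(&flags);
            return (int)flags;
    }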
Thanks,
Nick