Message-ID: <20170103102439.4fienez2fkgqwbrd@techsingularity.net>
Date:   Tue, 3 Jan 2017 10:24:39 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Nicholas Piggin <npiggin@...il.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Bob Peterson <rpeterso@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Steven Whitehouse <swhiteho@...hat.com>,
        Andrew Lutomirski <luto@...nel.org>,
        Andreas Gruenbacher <agruenba@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH 2/2] mm: add PageWaiters indicating tasks are waiting for
 a page bit

On Thu, Dec 29, 2016 at 03:26:15PM +1000, Nicholas Piggin wrote:
> > And I fixed that too.
> > 
> > Of course, I didn't test the changes (apart from building it). But
> > I've been running the previous version since yesterday, so far no
> > issues.
> 
> It looks good to me.
> 

FWIW, I blindly queued a test of Nick's patch, Linus' patch on top and
PeterZ's patch using 4.9 as a baseline so all could be applied cleanly.
Three machines were used, one of them NUMA with 2 sockets. The UMA
machines showed nothing unusual.

kernel building showed nothing unusual on any machine

git checkout in a loop showed:
	o minor gains with Nick's patch
	o no impact from Linus's patch
	o flat performance from PeterZ's

git test suite showed
	o close to flat performance on all patches
	o Linus' patch on top showed increased variability, but nothing serious

will-it-scale pagefault tests
	o page_fault1 and page_fault2 showed no differences in processes

	o page_fault3 using processes did show some large losses at some
	  process counts on all patches. The losses were not consistent
	  between runs, and there was no consistent loss as process counts
	  increased. Peter's patch appeared to have fewer problems, with
	  only one process count affected, so it *may* be more resistant
	  to the problem, but not completely, and it's not obvious why it
	  would be, so it could be a testing anomaly

	o page_fault3 using threads didn't show anything unusual. It's
	  possible that any problem with the waitqueue lookups is hidden
	  by mmap_sem. A rough sketch of what page_fault3 does follows
	  for reference
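
From memory, page_fault3 has each worker repeatedly dirty every page of
a freshly created shared file mapping, so a file-backed write fault and
the page lock are hit over and over. Run as processes each worker has
its own mm; run as threads they share one, which is presumably why
mmap_sem can hide other costs there. The sketch below is illustrative
only (made-up size and file name, not the benchmark source):

#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAPSIZE (128UL * 1024 * 1024)

int main(void)
{
	char path[] = "/tmp/pf3-sketch-XXXXXX";	/* illustrative name */
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = mkstemp(path);

	if (fd < 0 || unlink(path) != 0 || ftruncate(fd, MAPSIZE) != 0)
		return 1;

	for (int iter = 0; iter < 16; iter++) {
		char *map = mmap(NULL, MAPSIZE, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);

		if (map == MAP_FAILED)
			return 1;
		/* One store per page takes a shared, file-backed write fault. */
		for (unsigned long off = 0; off < MAPSIZE; off += pagesize)
			map[off] = 1;
		munmap(map, MAPSIZE);
	}
	close(fd);
	return 0;
}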

I think I can see something similar to what Dave reported, but not
consistently, not as severely, and only with processes in page_fault3.
Linus's patch appears to help a little but does not eliminate the
problem. Given the machine only had 2 sockets, it's perfectly possible
that Dave can see a consistent problem that I cannot. I didn't collect
profiles during the test run to see what was going on, as queueing the
tests was a drive-by bit of work while on holiday.

Reading both Nick's patch (which is already merged, so somewhat moot)
and PeterZ's, I found Nick's easier to understand, with some minor
gripes about naming.
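
For anyone following along without the patches in front of them, the
unlock fast path in Nick's approach looks roughly like the following.
This is paraphrased from memory rather than quoted from the patch, so
treat the helper names as approximate:

void unlock_page(struct page *page)
{
	page = compound_head(page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);

	clear_bit_unlock(PG_locked, &page->flags);
	/*
	 * Full barrier so the PG_locked clear is ordered before the
	 * PG_waiters load; otherwise a waiter that set PG_waiters
	 * after failing its trylock could be missed.
	 */
	smp_mb__after_atomic();
	if (PageWaiters(page))
		/* Only now take the hashed waitqueue lock and walk it. */
		wake_up_page_bit(page, PG_locked);
}

The win is that an uncontended unlock_page() never touches the hashed
waitqueue; only a real sleeper pays for the lookup and the waitqueue
lock.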

None of the patches showed the same lost wakeup I'd seen once on earlier
prototypes.

-- 
Mel Gorman
SUSE Labs
