Date:	Thu, 25 Oct 2012 23:49:59 +0300
From:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	"Kirill A. Shutemov" <kirill@...temov.name>,
	Andrea Arcangeli <aarcange@...hat.com>, linux-mm@...ck.org,
	Andi Kleen <ak@...ux.intel.com>,
	"H. Peter Anvin" <hpa@...ux.intel.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 10/10] thp: implement refcounting for huge zero page

On Wed, Oct 24, 2012 at 01:25:52PM -0700, Andrew Morton wrote:
> On Wed, 24 Oct 2012 22:45:52 +0300
> "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> 
> > On Wed, Oct 24, 2012 at 12:22:53PM -0700, Andrew Morton wrote:
> > > 
> > > I'm thinking that such a workload would be the above dd in parallel
> > > with a small app which touches the huge page and then exits, then gets
> > > executed again.  That "small app" sounds realistic to me.  Obviously
> > > one could exercise the zero page's refcount at higher frequency with a
> > > tight map/touch/unmap loop, but that sounds less realistic.  It's worth
> > > trying that exercise as well though.
> > > 
> > > Or do something else.  But we should try to probe this code's
> > > worst-case behaviour, get an understanding of its effects and then
> > > decide whether any such workload is realistic enough to worry about.
> > 
> > Okay, I'll try a few memory pressure scenarios.

A test program (assuming char *p and MB defined as 1UL << 20):

        while (1) {
                /* 2M-aligned 2M allocation; with glibc defaults this is
                 * served by mmap(), so free() below unmaps it again */
                posix_memalign((void **)&p, 2 * MB, 2 * MB);
                /* the read fault maps the huge zero page */
                assert(*p == 0);
                free(p);
        }

With this code running in the background, there is a pretty good chance that
the huge zero page is freeable (refcount == 1) when the shrinker callback
gets called - roughly one in two.
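
To spell out what "freeable" means here, the scheme being tested is roughly
the following (a sketch of how I'd summarize it, not the patch code verbatim;
hzp_get()/hzp_put() are illustrative names):

/*
 * huge_zero_refcount == 0   no huge zero page allocated
 * huge_zero_refcount == 1   allocated but mapped nowhere: freeable
 * huge_zero_refcount  > 1   one baseline ref plus one ref per mapping
 */
static atomic_t huge_zero_refcount;
static unsigned long huge_zero_pfn;

/* fault path: pin the page for a new mapping */
static inline bool hzp_get(void)
{
	/* on failure the caller allocates the page, publishes its pfn and
	 * starts the counter at 2: its own ref plus the baseline ref that
	 * only the shrinker is allowed to drop */
	return atomic_inc_not_zero(&huge_zero_refcount);
}

/* unmap path: drop a mapping reference, never the last (baseline) one */
static inline void hzp_put(void)
{
	BUG_ON(atomic_dec_and_test(&huge_zero_refcount));
}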

A pagecache hog (dd if=hugefile of=/dev/null bs=1M) creates enough pressure
to get the shrinker callback called, but it was only asked for the cache
size (nr_to_scan == 0).
I was not able to get it called with nr_to_scan > 0 in this scenario, so the
hzp was never freed.
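
For reference, the shrinker side looks roughly like this (again a sketch,
paraphrased from the patch rather than quoted; it reuses huge_zero_refcount
and huge_zero_pfn from the sketch above). With the current shrinker API the
same callback serves both the size query and the actual scan:

static int shrink_huge_zero_page(struct shrinker *shrink,
				 struct shrink_control *sc)
{
	if (!sc->nr_to_scan)
		/* size query only: reclaimable iff just the baseline
		 * reference remains */
		return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;

	/* scan: free the page only if we win the race for that last ref */
	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
		unsigned long pfn = xchg(&huge_zero_pfn, 0);
		BUG_ON(!pfn);
		__free_page(pfn_to_page(pfn));
	}
	return 0;
}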

I also tried another scenario: usemem -n16 100M -r 1000. It creates real
memory pressure - no easily reclaimable memory. This time the callback was
called with nr_to_scan > 0 and we freed the hzp. Under that pressure we fail
to allocate the hzp and the code takes the fallback path, as it is supposed to.
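
The fallback I mean is in the huge-pmd read-fault path. Roughly, as a
fragment-level sketch (get_huge_zero_pfn() here wraps the hzp_get()/allocation
logic from the earlier sketch and returns 0 on allocation failure;
set_huge_zero_pmd() and the fallback label are illustrative names, not the
patch's exact identifiers):

	if (!(flags & FAULT_FLAG_WRITE)) {
		unsigned long zero_pfn = get_huge_zero_pfn();

		if (unlikely(!zero_pfn)) {
			/* the 2M allocation failed under pressure: fall
			 * back to handling the fault with small pages */
			count_vm_event(THP_FAULT_FALLBACK);
			goto fallback;
		}
		/* map the huge zero page read-only at this pmd */
		set_huge_zero_pmd(mm, vma, haddr, pmd, zero_pfn);
		return 0;
	}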

Do I need to check any other scenario?

-- 
 Kirill A. Shutemov

