Message-Id: <20120816122023.c0e9bbc0.akpm@linux-foundation.org>
Date: Thu, 16 Aug 2012 12:20:23 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>, linux-mm@...ck.org,
Andi Kleen <ak@...ux.intel.com>,
"H. Peter Anvin" <hpa@...ux.intel.com>,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [PATCH, RFC 0/9] Introduce huge zero page
On Thu, 9 Aug 2012 12:08:11 +0300
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> During testing I noticed a big (up to 2.5x) memory consumption overhead
> on some workloads (e.g. ft.A from NPB) when THP is enabled.
>
> The main reason for the big difference is the lack of a zero page in the
> THP case: we have to allocate a real page on a read page fault.
>
> A program to demonstrate the issue:
> #include <assert.h>
> #include <stdlib.h>
> #include <unistd.h>
>
> #define MB 1024*1024
>
> int main(int argc, char **argv)
> {
> 	char *p;
> 	int i;
>
> 	posix_memalign((void **)&p, 2 * MB, 200 * MB);
> 	for (i = 0; i < 200 * MB; i += 4096)
> 		assert(p[i] == 0);
> 	pause();
> 	return 0;
> }
>
> With thp-never, RSS is about 400k, but with thp-always it's 200M.
> After the patchset, thp-always RSS is 400k too.
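For reference, a minimal sketch of how the quoted RSS numbers can be
checked, assuming the usual procfs/sysfs layout: it extends the test
above with a hypothetical print_rss() helper that reads VmRSS from
/proc/self/status, with the THP policy selected beforehand via
/sys/kernel/mm/transparent_hugepage/enabled.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MB (1024 * 1024)

/* Print this process's VmRSS line from /proc/self/status. */
static void print_rss(void)
{
	FILE *f = fopen("/proc/self/status", "r");
	char line[128];

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "VmRSS:", 6))
			fputs(line, stdout);
	fclose(f);
}

int main(void)
{
	char *p;
	int i;

	/* THP policy is assumed to be set beforehand, e.g.:
	 *   echo always > /sys/kernel/mm/transparent_hugepage/enabled
	 * (or "never" for the non-THP case). */
	if (posix_memalign((void **)&p, 2 * MB, 200 * MB))
		return 1;
	for (i = 0; i < 200 * MB; i += 4096)
		assert(p[i] == 0);	/* touch (read) every 4k page */
	print_rss();	/* ~200M with plain thp-always, ~400k with thp-never
			 * or with the huge zero page applied */
	pause();	/* keep the mapping alive for external inspection */
	return 0;
}

Comparing the printed VmRSS after switching the THP policy reproduces the
comparison above; the per-page assert keeps the access pattern read-only,
which is exactly the case the huge zero page is meant to cover.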
That's a pretty big improvement for a rather fake test case. I wonder
how much benefit we'd see with real workloads?
Things are rather quiet at present, with summer and beaches and Kernel
Summit coming up. Please resend these patches early next month and
let's see if we can get a bit of action happening?