Message-Id: <20131216160128.aa1f1eb8039f5eee578cf560@linux-foundation.org>
Date: Mon, 16 Dec 2013 16:01:28 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Dave Hansen <dave@...1.net>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Pravin B Shelar <pshelar@...ira.com>,
Christoph Lameter <cl@...ux-foundation.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Pekka Enberg <penberg@...nel.org>
Subject: Re: [RFC][PATCH 0/7] re-shrink 'struct page' when SLUB is on.
On Fri, 13 Dec 2013 15:59:03 -0800 Dave Hansen <dave@...1.net> wrote:
> SLUB depends on a 16-byte cmpxchg for an optimization. For the
> purposes of this series, I'm assuming that it is a very important
> optimization that we desperately need to keep around.
What if we don't do that?
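For reference, the optimization at stake: SLUB's lock-free fast path updates a page's freelist pointer and its packed counters word with a single 16-byte compare-and-exchange, so the two can never be observed out of sync. A minimal userspace sketch of that primitive, assuming GCC's __atomic builtins on x86-64 rather than the kernel's actual cmpxchg_double() wrapper:

#include <stdbool.h>

/* stand-ins for struct page's freelist/counters pair */
struct slot {
	void         *freelist;   /* first free object in the slab page */
	unsigned long counters;   /* inuse/objects/frozen packed together */
} __attribute__((aligned(16))); /* cmpxchg16b faults on unaligned operands */

static bool cas_double(struct slot *s, struct slot old, struct slot new)
{
	/* with gcc -mcx16 this can compile down to a single cmpxchg16b;
	 * both 8-byte words are swapped atomically or not at all */
	return __atomic_compare_exchange(s, &old, &new, false,
					 __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}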
> In order to get guaranteed 16-byte alignment (required by the
> hardware on x86), 'struct page' is padded out from 56 to 64
> bytes.
>
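The 56-to-64 padding is easy to reproduce outside the kernel; an illustrative stand-in (made-up layout, not the real struct page):

#include <stdio.h>

struct page_plain  { char payload[56]; };
struct page_padded { char payload[56]; } __attribute__((aligned(16)));

int main(void)
{
	/* aligned(16) rounds sizeof up to the next multiple of 16, so
	 * every element of the memmap array stays properly aligned */
	printf("%zu %zu\n", sizeof(struct page_plain),
	       sizeof(struct page_padded));   /* prints: 56 64 */
	return 0;
}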
> Those 8 bytes matter. We've gone to great lengths to keep
> 'struct page' small in the past. It's a shame that we bloat it
> now just for alignment reasons when we have extra space. Plus,
> bloating such a commonly-touched structure *HAS* to have cache
> footprint implications.
>
> These patches attempt _internal_ alignment instead of external
> alignment for slub.
>
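The distinction, roughly: external alignment pads the whole structure to 64 bytes so a field at a fixed offset is always 16-byte aligned; internal alignment keeps sizeof at 56 and finds an aligned pair of words inside each entry instead. Since 56 mod 16 == 8, a fixed offset only lines up in every other array element, so one possible scheme (illustrative only; the actual patches may differ) reserves three adjacent words and picks whichever adjacent pair is aligned:

#include <stdint.h>

struct fake_page {
	unsigned long flags;
	unsigned long words[3];   /* freelist/counters live in two of these */
	char rest[24];            /* remaining fields; sizeof stays 56 */
};

static unsigned long *cmpxchg_pair(struct fake_page *p)
{
	uintptr_t a = (uintptr_t)p->words;

	/* the words are 8-byte aligned, so either words[0] or words[1]
	 * starts a 16-byte-aligned, 16-byte-wide window */
	return (a % 16 == 0) ? &p->words[0] : &p->words[1];
}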
> I also got a bug report from some folks running a large database
> benchmark. Their old kernel uses slab and their new one uses
> slub. They were swapping and couldn't figure out why. It turned
> out to be the 2GB of RAM that the slub padding wastes on their
> system.
>
> On my box, that 2GB cost about $200 to populate back when we
> bought it. I want my $200 back.
>
> This set takes me from 16909584K of reserved memory at boot
> down to 14814472K, so almost *exactly* 2GB of savings! It also
> helps performance, presumably because it touches 14% fewer
> struct page cachelines. A 30GB dd to a ramfs file:
>
> dd if=/dev/zero of=bigfile bs=$((1<<30)) count=30
>
> is sped up by about 4.4% in my testing.
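Those numbers hang together: a quick back-of-the-envelope check, assuming 8 bytes saved per struct page and 4K pages, puts the box at about 1TB of RAM:

#include <stdio.h>

int main(void)
{
	long long saved_kb = 16909584LL - 14814472LL;  /* 2095112K, ~2GB */
	long long pages    = saved_kb * 1024 / 8;      /* 8 bytes saved per page */

	printf("%lldK saved -> %lld struct pages -> ~%lld GB of RAM\n",
	       saved_kb, pages, pages * 4096 / (1LL << 30));
	return 0;
}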
This is a gruesome and horrible tale of inefficiency and regression.
From 5-10 minutes of gitting I couldn't see any performance testing
results for slub's cmpxchg_double stuff. I am thinking we should just
tip it all overboard unless someone can demonstrate sufficiently
serious losses from so doing.
--- a/arch/x86/Kconfig~a
+++ a/arch/x86/Kconfig
@@ -78,7 +78,6 @@ config X86
 	select ANON_INODES
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_CMPXCHG_LOCAL
-	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_ARCH_KMEMCHECK
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
_
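Worth spelling out what tipping it overboard costs: without HAVE_CMPXCHG_DOUBLE, SLUB falls back to updating freelist and counters under the per-slab lock instead of with one wide CAS. Schematically (a userspace sketch using pthreads, not the kernel's bit-spinlock slab_lock()):

#include <stdbool.h>
#include <pthread.h>

struct fake_slab {
	pthread_mutex_t lock;     /* stand-in for slab_lock() */
	void *freelist;
	unsigned long counters;
};

/* the lockless fast path degrades to compare-and-swap-under-a-lock */
static bool slow_cmpxchg_double(struct fake_slab *s,
				void *old_free, unsigned long old_cnt,
				void *new_free, unsigned long new_cnt)
{
	bool ok = false;

	pthread_mutex_lock(&s->lock);
	if (s->freelist == old_free && s->counters == old_cnt) {
		s->freelist = new_free;
		s->counters = new_cnt;
		ok = true;
	}
	pthread_mutex_unlock(&s->lock);
	return ok;
}

Whether that lock ever shows up in a real workload is exactly the performance data missing from the git history.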