Message-ID: <52B345A3.6090700@sr71.net>
Date: Thu, 19 Dec 2013 11:14:43 -0800
From: Dave Hansen <dave@...1.net>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Christoph Lameter <cl@...ux.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Pravin B Shelar <pshelar@...ira.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Pekka Enberg <penberg@...nel.org>
Subject: Re: [RFC][PATCH 0/7] re-shrink 'struct page' when SLUB is on.
On 12/18/2013 04:41 PM, Andrew Morton wrote:
> So your scary patch series which shrinks struct page while retaining
> the cmpxchg_double() might reclaim most of this loss?
Well, this is cool. Except for 1 case out of 14 (1024 bytes with the
alloc all / free all loops), my patched kernel either outperforms or
matches both of the existing cases.
To recap, we have two workloads, essentially the time to free an "old"
kmalloc which is not cache-warm (mode=0) and the time to free one which
is warm since it was just allocated (mode=1).
This is tried for 3 different kernel configurations:
1. The default today, SLUB with a 64-byte 'struct page' using cmpxchg16
2. Same kernel source as (1), but with SLUB's compile-time options
changed to disable CMPXCHG16 and not align 'struct page'
3. Patched kernel to internally align the SLUB data so that we can both
have an unaligned 56-byte 'struct page' and use the CMPXCHG16
optimization.
> https://docs.google.com/spreadsheet/ccc?key=0AgUCVXtr5IwedDNXb1FLNEFqVHdSNDF6YktYZTBndEE&usp=sharing
I'll respin the patches a bit and send out another version with some
small updates.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/