Message-ID: <00000142e7ea519d-8906d225-c99c-44b5-b381-b573c75fd097-000000@email.amazonses.com>
Date: Thu, 12 Dec 2013 17:46:02 +0000
From: Christoph Lameter <cl@...ux.com>
To: Dave Hansen <dave@...1.net>
cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kirill.shutemov@...ux.intel.com, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg
On Wed, 11 Dec 2013, Dave Hansen wrote:
>
> The write-argument to cmpxchg_double() must be 16-byte aligned.
> We used to align 'struct page' itself in order to guarantee this,
> but that wastes 8 bytes per page. Instead, we take the 8 bytes
> internal to the page before page->counters and move freelist
> between there and the existing 8 bytes after counters. That way,
> no matter how 'struct page' itself is aligned, we can ensure that
> we have a 16-byte area with which to do this cmpxchg.
Well this adds additional branching to the fast paths.