Message-ID: <alpine.DEB.2.21.1909260015140.1508@www.lameter.com>
Date: Thu, 26 Sep 2019 00:16:03 +0000 (UTC)
From: Christopher Lameter <cl@...ux.com>
To: Vlastimil Babka <vbabka@...e.cz>
cc: Andrew Morton <akpm@...ux-foundation.org>,
David Sterba <dsterba@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Ming Lei <ming.lei@...hat.com>,
Dave Chinner <david@...morbit.com>,
Matthew Wilcox <willy@...radead.org>,
"Darrick J . Wong" <darrick.wong@...cle.com>,
Christoph Hellwig <hch@....de>, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org,
James Bottomley <James.Bottomley@...senPartnership.com>,
linux-btrfs@...r.kernel.org, Roman Gushchin <guro@...com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v2 2/2] mm, sl[aou]b: guarantee natural alignment for
kmalloc(power-of-two)
On Wed, 25 Sep 2019, Vlastimil Babka wrote:
> Most of the new code is for SLOB, which has no debugging and yet
> misaligns. For SLUB and SLAB, it's just passing the alignment argument
> to kmem_cache_create() for the kmalloc caches, which means just a few
> extra instructions during boot, and no extra code in kmalloc/kfree itself.
SLOB follows the standard alignment rules for slab allocators and will
align objects correctly if you ask the allocator for a properly aligned object.
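
For illustration only (the cache name and sizes below are made up, and the
kmalloc() behavior described is what the patch proposes, not a pre-existing
guarantee), this is roughly what "asking the allocator for a properly
aligned object" looks like, compared with relying on kmalloc() alignment:

	#include <linux/init.h>
	#include <linux/slab.h>

	static struct kmem_cache *example_cache;

	static int __init example_init(void)
	{
		/*
		 * Explicitly aligned cache: objects are 512-byte aligned
		 * on SLAB, SLUB and SLOB because the alignment is part of
		 * the cache definition passed to kmem_cache_create().
		 */
		example_cache = kmem_cache_create("example_cache", 512, 512,
						  0, NULL);
		if (!example_cache)
			return -ENOMEM;

		/*
		 * Under the proposed patch, kmalloc(512, GFP_KERNEL) would
		 * likewise return a 512-byte-aligned object, without a
		 * dedicated cache, because 512 is a power of two.
		 */
		return 0;
	}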