Date:   Wed, 20 Mar 2019 09:48:47 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Christopher Lameter <cl@...ux.com>
Cc:     linux-mm@...ck.org, Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Ming Lei <ming.lei@...hat.com>,
        Dave Chinner <david@...morbit.com>,
        Matthew Wilcox <willy@...radead.org>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Christoph Hellwig <hch@....de>,
        Michal Hocko <mhocko@...nel.org>, linux-kernel@...r.kernel.org,
        linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-block@...r.kernel.org
Subject: Re: [RFC 0/2] guarantee natural alignment for kmalloc()

On 3/20/19 1:43 AM, Christopher Lameter wrote:
> On Tue, 19 Mar 2019, Vlastimil Babka wrote:
> 
>> The recent thread [1] inspired me to look into guaranteeing alignment for
>> kmalloc() for power-of-two sizes. Turns out it's not difficult and in most
>> configuration nothing really changes as it happens implicitly. More details in
>> the first patch. If we agree we want to do this, I will see where to update
>> documentation and perhaps if there are any workarounds in the tree that can be
>> converted to plain kmalloc() afterwards.
> 
> This means that the alignments are no longer uniform for all kmalloc
> caches and we get back to code making all sorts of assumptions about
> kmalloc alignments.

Natural alignment to size is rather well defined, no? Would anyone ever
assume a larger one, for what reason?
It's now where some make assumptions (even unknowingly) for natural
alignment, and those then break with SLOB, or SLUB with debug.
There are two 'odd' sizes 96 and 192, which will keep cacheline size
alignment; would anyone really expect more than 64 bytes?
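To be concrete about what natural alignment to size means here, a quick
userspace sketch of the check (my own illustration, not code from the
patch):

/* "Naturally aligned to size": for a power-of-two size, the object's
 * address is a multiple of that size.  Userspace illustration only. */
#include <stdint.h>
#include <stdio.h>

static int naturally_aligned(const void *p, size_t size)
{
        /* size is assumed to be a power of two */
        return ((uintptr_t)p & (size - 1)) == 0;
}

int main(void)
{
        static char buf[1024] __attribute__((aligned(1024)));

        printf("%d\n", naturally_aligned(buf, 1024));        /* 1 */
        printf("%d\n", naturally_aligned(buf + 512, 512));   /* 1 */
        printf("%d\n", naturally_aligned(buf + 512, 1024));  /* 0 */
        return 0;
}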

> Currently all kmalloc objects are aligned to KMALLOC_MIN_ALIGN. That will
> no longer be the case and alignments will become inconsistent.

KMALLOC_MIN_ALIGN is still the minimum, but in practice the alignment is
larger, which is not a problem.

Also let me stress again that nothing really changes except for SLOB,
and for SLUB with debug options. The natural alignment for power-of-two
sizes already happens, as SLAB and SLUB both pack objects starting at
the page boundary. So people make assumptions based on that, and those
then break with SLOB, or SLUB with debug. This patch just prevents that
breakage by guaranteeing natural alignment at all times.
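To spell the packing argument out with a toy userspace model (not
SLAB/SLUB code): objects of a power-of-two size laid out back to back
from a page boundary each sit at an offset that is a multiple of their
size, so every object is naturally aligned.

/* Toy model: a page-aligned buffer stands in for a slab page, and
 * power-of-two objects are packed from its start with no metadata in
 * between, as SLAB/SLUB do for power-of-two kmalloc caches. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
        const size_t page_size = 4096;
        char *page = aligned_alloc(page_size, page_size);
        size_t size, off;

        assert(page);
        for (size = 8; size <= page_size; size *= 2)
                for (off = 0; off + size <= page_size; off += size)
                        /* each object address is a multiple of its size */
                        assert(((uintptr_t)(page + off) & (size - 1)) == 0);

        free(page);
        return 0;
}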

> I think its valuable that alignment requirements need to be explicitly
> requested.

That's still possible for named caches created by kmem_cache_create().
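For reference, the explicit path looks roughly like this (the "foo"
cache and its 512-byte size/alignment are made up for illustration):

/* Sketch of a named cache with an explicitly requested alignment. */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>

static struct kmem_cache *foo_cache;

static int __init foo_cache_init(void)
{
        /* args: name, object size, alignment, flags, constructor */
        foo_cache = kmem_cache_create("foo", 512, 512, 0, NULL);
        return foo_cache ? 0 : -ENOMEM;
}

kmalloc() has no per-call alignment argument, which is why the implicit
guarantee matters there.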

> Lets add an array of power of two aligned kmalloc caches if that is really
> necessary. Add some GFP_XXX flag to kmalloc to make it ^2 aligned maybe?

That would be unnecessary and wasteful, as the existing caches are
already naturally aligned in the common configurations. Requiring a flag
doesn't help with implicit assumptions going wrong. I really don't think
this needs to get more complicated than adjusting the uncommon
configurations, which is what this patch does.
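In other words, once the guarantee is in place, a caller that needs,
say, a 512-byte aligned buffer could rely on plain kmalloc() -- a
hypothetical caller, just to illustrate what the cover letter means by
converting workarounds:

/* Hypothetical caller: under this proposal a power-of-two kmalloc()
 * size implies natural alignment, so no extra workaround is needed. */
#include <linux/slab.h>

static void *alloc_sector_buf(void)
{
        /* 512 is a power of two, so the buffer is 512-byte aligned */
        return kmalloc(512, GFP_KERNEL);
}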
