Date:   Tue, 25 May 2021 13:35:36 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Jann Horn <jannh@...gle.com>
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to
 ___slab_alloc()

On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
> includes cases where it is not needed, such as when the allocation ends up in
> the page allocator and has to awkwardly re-enable irqs based on gfp flags.
> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
> it hits the __slab_alloc() slow path, and long periods with disabled interrupts
> are undesirable.
> 
> As a first step towards reducing irq disabled periods, move irq handling into
> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
> from becoming invalid via migrate_disable(). This does not protect against
> concurrent access from a preempting context, which is still prevented by
> disabled irqs for most of ___slab_alloc(). As a small immediate benefit, the
> slab_out_of_memory() call from ___slab_alloc() is now done with irqs enabled.
> 
> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
> before calling ___slab_alloc(), which then disables them at its discretion. The
> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
> 
> When ___slab_alloc() calls new_slab() to allocate a new page, preemption is
> also re-enabled, because new_slab() will re-enable interrupts in contexts
> that allow blocking.
> 
> The patch itself will thus increase overhead a bit due to disabled migration
> and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will
> be gradually improved in the following patches.
> 
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>

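For reference, this is roughly the calling structure I read out of the
description above (a heavily simplified sketch, not the actual diff):

	static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
				  unsigned long addr, struct kmem_cache_cpu *c)
	{
		void *p;

		/*
		 * Keep s->cpu_slab from becoming invalid under us; irq
		 * disabling/enabling now happens inside ___slab_alloc().
		 */
		migrate_disable();
		p = ___slab_alloc(s, gfpflags, node, addr, c);
		migrate_enable();
		return p;
	}

with ___slab_alloc() doing its own local_irq_save()/local_irq_restore() and
dropping irqs around the call into the page allocator.
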
Why did you use migrate_disable instead of preempt_disable? There is a
fairly large comment in include/linux/preempt.h on why migrate_disable
is undesirable, so new users are likely to be put under the microscope
once Thomas or Peter notices it.

I think you are using it so that an allocation request can be preempted by
a higher-priority task, but given that the code was disabling interrupts,
there was already some preemption latency. However, migrate_disable is
more expensive than preempt_disable (a function call versus a simple
increment). On that basis, I'd recommend starting with preempt_disable
and only using migrate_disable if necessary.
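
As a rough sketch of what I mean (assuming the only requirement at this
call site is that s->cpu_slab stays stable, which is my reading):

	/* sketch only, not the actual patch */
	preempt_disable();
	p = ___slab_alloc(s, gfpflags, node, addr, c);
	preempt_enable();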

Bonus points for adding a comment where ___slab_alloc disables IRQs to
clarify what is protected -- I assume it's protecting kmem_cache_cpu
from being modified from interrupt context. If so, it's potentially a
local_lock candidate.
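
Purely as an illustrative sketch (not something this series does), a
local_lock there could look roughly like:

	/* hypothetical sketch, not from the patch */
	struct kmem_cache_cpu {
		local_lock_t lock;	/* protects the fields below */
		void **freelist;
		unsigned long tid;
		struct page *page;
	};

	local_lock_irqsave(&s->cpu_slab->lock, flags);
	/* ... manipulate c->freelist / c->page ... */
	local_unlock_irqrestore(&s->cpu_slab->lock, flags);

which boils down to plain local_irq_save() on !PREEMPT_RT but documents
exactly what the irq disabling protects.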

-- 
Mel Gorman
SUSE Labs
