Date:   Thu, 11 Mar 2021 10:49:23 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...e.com>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Shakeel Butt <shakeelb@...gle.com>, tglx@...utronix.de,
        john.ogness@...utronix.de, urezki@...il.com, ast@...com,
        Eric Dumazet <edumazet@...gle.com>,
        Mina Almasry <almasrymina@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] hugetlb: select PREEMPT_COUNT if HUGETLB_PAGE for
 in_atomic use

On Wed, Mar 10, 2021 at 06:13:21PM -0800, Mike Kravetz wrote:
> put_page does not correctly handle all calling contexts for hugetlb
> pages.  This was recently discussed in the threads [1] and [2].
> 
> free_huge_page is the routine called for the final put_page of hugetlb
> pages.  Since at least the beginning of git history, free_huge_page has
> acquired the hugetlb_lock to move the page to a free list and possibly
> perform other processing. When this code was originally written, the
> hugetlb_lock should have been made irq safe.
> 
> For many years, nobody noticed this situation until lockdep code caught
> free_huge_page being called from irq context.  By this time, another
> lock (hugetlb subpool) was also taken in the free_huge_page path. 

AFAICT there's no actual problem with making spool->lock IRQ-safe too.
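
The conversion itself is the bog-standard pattern, roughly this (a
sketch only, with a hypothetical caller, not a diff against any tree):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(hugetlb_lock);

/* illustrative free path, stands in for the real one */
static void hugetlb_free_path(void)
{
	unsigned long flags;

	/* was: spin_lock(&hugetlb_lock); */
	spin_lock_irqsave(&hugetlb_lock, flags);
	/* move the page to the free list, update counters, ... */
	spin_unlock_irqrestore(&hugetlb_lock, flags);
}

Same treatment for spool->lock; after that the free path no longer
cares what context it's called from, provided nothing under either
lock sleeps.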

> In addition, hugetlb cgroup code had been added which could hold
> hugetlb_lock for a considerable period of time. 

cgroups, always bloody cgroups. The scheduler (and a fair number of
other places) get to deal with cgroups with IRQs disabled, so I'm sure
this can too.

> Because of this, commit
> c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in non-task
> context") was added to address the issue of free_huge_page being called
> from irq context.  That commit hands off free_huge_page processing to a
> workqueue if !in_task.
> 
> The !in_task check handles the case of being called from irq context.
> However, it does not take into account the case when called with irqs
> disabled as in [1].
> 
> To complicate matters, functionality has been added to hugetlb
> such that free_huge_page may block/sleep in certain situations.  The
> hugetlb_lock is of course dropped before potentially blocking.

AFAICT that's because of CMA, right? It's only hstate_is_gigantic() and
free_gigantic_page() that have that particular trainwreck.

So you could move the workqueue there, and leave all the other hugetlb
sizes unaffected. AFAICT if you limit the workqueue crud to
cma_clear_bitmap(), you don't get your...

> One way to handle all calling contexts is to have free_huge_page always
> send pages to the workqueue for processing.  This idea was briefly
> discussed here [3], but has some undesirable side effects.

... user visible side effects either.
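
Roughly something like this inside mm/cma.c (a sketch only; the
deferred-clear struct and the helper names are made up for
illustration, not actual mm/cma.c code):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct cma_deferred_clear {
	struct work_struct work;
	struct cma *cma;
	unsigned long pfn;
	unsigned long count;
};

static void cma_clear_bitmap_workfn(struct work_struct *work)
{
	struct cma_deferred_clear *dc =
		container_of(work, struct cma_deferred_clear, work);

	/* cma_clear_bitmap() takes cma->lock (a mutex), so it may sleep */
	cma_clear_bitmap(dc->cma, dc->pfn, dc->count);
	kfree(dc);
}

/* called from cma_release() when the caller may not sleep */
static void cma_defer_clear_bitmap(struct cma *cma, unsigned long pfn,
				   unsigned long count)
{
	struct cma_deferred_clear *dc = kzalloc(sizeof(*dc), GFP_ATOMIC);

	if (!dc)
		return;		/* error handling elided in this sketch */

	INIT_WORK(&dc->work, cma_clear_bitmap_workfn);
	dc->cma = cma;
	dc->pfn = pfn;
	dc->count = count;
	schedule_work(&dc->work);
}

That keeps the sleeping bit out of put_page() for gigantic pages and
leaves every other hugetlb size fully synchronous.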

> Ideally, the hugetlb_lock should have been irq safe from the beginning
> and any code added to the free_huge_page path should have taken this
> into account.  However, this has not happened.  The code today does have
> the ability to hand off requests to a workqueue.  It does this for calls
> from irq context.  Changing the check in the code from !in_task to
> in_atomic would handle the situations when called with irqs disabled.
> However, it does not handle the case when called with a spinlock
> held.  This is needed because the code could block/sleep.
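
For reference, the existing hand-off plus the proposed check change
look roughly like this (simplified from the c77c0a8ac4c5 pattern, not
the actual mm/hugetlb.c code):

#include <linux/kernel.h>
#include <linux/llist.h>
#include <linux/mm.h>
#include <linux/preempt.h>
#include <linux/workqueue.h>

static void __free_huge_page(struct page *page);	/* the real free path */
static void free_hpage_workfn(struct work_struct *work);

static LLIST_HEAD(hpage_freelist);
static DECLARE_WORK(free_hpage_work, free_hpage_workfn);

void free_huge_page(struct page *page)
{
	/*
	 * The check under discussion: the current code tests !in_task();
	 * in_atomic() is only meaningful when CONFIG_PREEMPT_COUNT is
	 * enabled, hence the patch selecting PREEMPT_COUNT.
	 */
	if (in_atomic()) {
		/* stash the page on a lockless list and punt to a workqueue */
		if (llist_add((struct llist_node *)&page->mapping,
			      &hpage_freelist))
			schedule_work(&free_hpage_work);
		return;
	}

	__free_huge_page(page);		/* may sleep for gigantic/CMA pages */
}

static void free_hpage_workfn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&hpage_freelist);

	while (node) {
		struct page *page = container_of((void *)node,
						 struct page, mapping);

		node = node->next;
		__free_huge_page(page);
	}
}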

I'll argue the current workqueue thing is in the wrong place to begin
with.

So how about you make hugetlb_lock and spool->lock IRQ-safe, move the
workqueue thingy into cma_release(), and then worry about optimizing the
cgroup crap?

Correctness first, performance second. Also, if you really care about
performance, not using cgroups is a very good option anyway.
