Message-ID: <ZOJXgFJybD1ljCHL@casper.infradead.org>
Date:   Sun, 20 Aug 2023 19:12:16 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Mateusz Guzik <mjguzik@...il.com>
Cc:     torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: remove unintentional voluntary preemption in
 get_mmap_lock_carefully

On Sun, Aug 20, 2023 at 12:43:03PM +0200, Mateusz Guzik wrote:
> Found by checking off-CPU time during kernel build (like so:
> "offcputime-bpfcc -Ku"), sample backtrace:
>     finish_task_switch.isra.0
>     __schedule
>     __cond_resched
>     lock_mm_and_find_vma
>     do_user_addr_fault
>     exc_page_fault
>     asm_exc_page_fault
>     -                sh (4502)

Now that I'm awake, this backtrace really surprises me.  Do we not check
need_resched on entry?  It seems terribly unlikely that need_resched
gets set between entry and getting to this point, so I guess we must
not.

I suggest the version of the patch which puts might_sleep() before the
mmap_read_trylock() is the right one to apply.  It's basically what
we've done forever, except that now we'll be rescheduling without the
mmap lock held, which just seems like an overall win.
