Message-ID: <ZOLg2kmvKb4eGDrt@casper.infradead.org>
Date:   Mon, 21 Aug 2023 04:58:18 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Mateusz Guzik <mjguzik@...il.com>
Cc:     torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: remove unintentional voluntary preemption in
 get_mmap_lock_carefully

On Mon, Aug 21, 2023 at 03:13:03AM +0200, Mateusz Guzik wrote:
> On Sun, Aug 20, 2023 at 07:12:16PM +0100, Matthew Wilcox wrote:
> > On Sun, Aug 20, 2023 at 12:43:03PM +0200, Mateusz Guzik wrote:
> > > Found by checking off-CPU time during kernel build (like so:
> > > "offcputime-bpfcc -Ku"), sample backtrace:
> > >     finish_task_switch.isra.0
> > >     __schedule
> > >     __cond_resched
> > >     lock_mm_and_find_vma
> > >     do_user_addr_fault
> > >     exc_page_fault
> > >     asm_exc_page_fault
> > >     -                sh (4502)
> > 
> > Now I'm awake, this backtrace really surprises me.  Do we not check
> > need_resched on entry?  It seems terribly unlikely that need_resched
> > gets set between entry and getting to this point, so I guess we must
> > not.
> > 
> > I suggest the version of the patch which puts might_sleep() before the
> > mmap_read_trylock() is the right one to apply.  It's basically what
> > we've done forever, except that now we'll be rescheduling without the
> > mmap lock held, which just seems like an overall win.
> > 
> 
> I can't sleep, and your response made me curious: is that really safe
> here?
> 
> As I wrote in another email, the routine is concerned with a case of the
> kernel faulting on something it should not have. For a case like that I
> find rescheduling to another thread to be most concerning.

Hmm, initially I didn't see it, but you're concerned with something like:

        foo->bar = NULL;
        spin_lock(&foo->lock);
        printk("%d\n", foo->bar->baz);  /* faults on NULL with foo->lock held */

And yeah, scheduling away in that case would be bad.

> That said I think I found a winner -- add need_resched() prior to
> trylock.
> 
> This adds less work than you would have added with might_sleep (a func
> call), still respects the preemption point, dodges exception table
> checks in the common case and does not switch away if there is
> anything fishy going on.
> 
> Or just do that might_sleep.
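
If I'm reading that right, you mean something like this -- my sketch of
the idea, not your actual patch, and the surrounding function body is
from memory:

        /*
         * In get_mmap_lock_carefully(): if a reschedule is already pending,
         * skip the trylock fast path and fall through to the existing slow
         * path, which checks the exception tables and can sleep on the lock.
         */
        if (likely(!need_resched() && mmap_read_trylock(mm)))
                return true;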

The might_sleep() is clearly safe, but I thought of a different take on
the problem you've found, which is that we used to check need_resched
on _every_ page fault, because we used to take the mmap_lock on every
page fault.  Now we only check it on the minority of page faults which
can't be handled under the VMA lock.  But we can't just slam a
might_resched() into the start of the fault handler, because of the
problem you outlined above.

So how about something like this:

+++ b/arch/x86/mm/fault.c
@@ -1365,6 +1365,7 @@ void do_user_addr_fault(struct pt_regs *regs,
        if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
                vma_end_read(vma);

+       might_resched();
        if (!(fault & VM_FAULT_RETRY)) {
                count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
                goto done;

We found a VMA, so we know it isn't a NULL pointer dereference.  And we've
released the VMA lock at this point, so we won't be blocking anything from
making progress.  I'm not thrilled about having to replicate this in each
architecture, but I also don't love putting it in lock_vma_under_rcu()
(since someone who actually can't schedule might call it -- that certainly
wouldn't be obvious from the function name).

Then we can leave the might_sleep() exactly where it is in
get_mmap_lock_carefully(); it's really unlikely to trigger a reschedule.
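
For anyone who doesn't have mm/memory.c open, the function looks roughly
like this (typed from memory, so treat it as a sketch rather than the
exact source):

        static inline bool get_mmap_lock_carefully(struct mm_struct *mm,
                                                   struct pt_regs *regs)
        {
                /*
                 * The might_sleep() in question: a debug annotation, but under
                 * voluntary preemption also a reschedule point -- which is what
                 * started this thread.
                 */
                if (likely(mmap_read_trylock(mm))) {
                        might_sleep();
                        return true;
                }

                /*
                 * Kernel-mode fault: only wait for the lock if the faulting
                 * instruction has an exception table fixup.
                 */
                if (regs && !user_mode(regs)) {
                        if (!search_exception_tables(instruction_pointer(regs)))
                                return false;
                }

                return !mmap_read_lock_killable(mm);
        }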
