Message-ID: <CAJuCfpG4BrbXUypWsEwaSC8UZbciD-KytPGsoF802W6f4R9QTQ@mail.gmail.com>
Date:   Mon, 18 Sep 2023 14:20:21 -0700
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        syzbot <syzbot+b591856e0f0139f83023@...kaller.appspotmail.com>,
        akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] kernel BUG in vma_replace_policy

On Fri, Sep 15, 2023 at 7:44 PM Hugh Dickins <hughd@...gle.com> wrote:
>
> On Fri, 15 Sep 2023, Suren Baghdasaryan wrote:
> > On Fri, Sep 15, 2023 at 9:09 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > >
> > > Thanks for the feedback, Hugh!
> > > Yeah, this positive err handling is kinda weird. If this behavior (do
> > > as much as possible even if we eventually fail) is specific to
> > > mbind(), then we could keep walk_page_range() as is and lock the VMAs
> > > inside the loop that calls mbind_range(), conditioned on ret being
> > > positive. That would be the simplest solution IMHO. But if we expect
> > > walk_page_range() to always apply the requested page_walk_lock policy
> > > to all VMAs even when one of the mm_walk_ops callbacks returns a
> > > positive error in the middle of the walk, then my fix would cover
> > > that (rough sketch below). So, to me the important question is how we
> > > want walk_page_range() to behave in these conditions. I think we
> > > should answer that first and document it. Then the fix will be easy.
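> > >
> > > For illustration, the generic-pagewalk variant would look roughly
> > > like this in walk_page_range()'s VMA loop (hypothetical sketch of the
> > > idea only, not the exact diff I posted):
> > >
> > >                 if (err) {
> > >                         if (err > 0 && ops->walk_lock == PGWALK_WRLOCK) {
> > >                                 /*
> > >                                  * A positive err ends the walk, but
> > >                                  * keep applying the lock policy to
> > >                                  * the VMAs we did not reach.
> > >                                  */
> > >                                 while ((vma = find_vma(walk.mm, next)) &&
> > >                                        vma->vm_start < end) {
> > >                                         process_vma_walk_lock(vma,
> > >                                                         ops->walk_lock);
> > >                                         next = vma->vm_end;
> > >                                 }
> > >                         }
> > >                         break;
> > >                 }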
> >
> > I looked at all the cases where we perform a page walk while locking
> > VMAs, and mbind() seems to be the only one that would require
> > walk_page_range() to lock all VMAs even for a failed walk.
>
> Yes, I can well believe that.
>
> > So, I suggest this fix instead, and I can also document that if
> > walk_page_range() fails it might not apply the page_walk_lock policy
> > to all of the VMAs.
> >
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 42b5567e3773..cbc584e9b6ca 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -1342,6 +1342,9 @@ static long do_mbind(unsigned long start, unsigned long len,
> >          vma_iter_init(&vmi, mm, start);
> >          prev = vma_prev(&vmi);
> >          for_each_vma_range(vmi, vma, end) {
> > +                /* If queue_pages_range failed then not all VMAs might be locked */
> > +                if (ret)
> > +                        vma_start_write(vma);
> >                  err = mbind_range(&vmi, vma, &prev, start, end, new);
> >                  if (err)
> >                          break;
> >
> > If this looks good I'll post the patch. Matthew, Hugh, anyone else?
>
> Yes, I do prefer this to adding those positive-ret mods into the
> generic pagewalk.  The "if (ret)" above is just a minor optimization
> that I would probably not have bothered with (does it even save any
> atomics?), but I guess it helps as documentation.
>
> I think it's quite likely that mbind() will be changed sooner or later
> not to need this; but it's best to fix this vma locking issue urgently
> as above, without depending on any mbind() behavioral discussions.
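
Re the atomics question: vma_start_write() already returns early when
the VMA is write-locked, so the unguarded call would only cost a read
of vm_lock_seq; no atomics saved either way. Roughly, paraphrasing
include/linux/mm.h around this time (details may differ by tree):

static inline void vma_start_write(struct vm_area_struct *vma)
{
	int mm_lock_seq;

	/* Already write-locked in this mmap_lock cycle: nothing to do. */
	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;

	down_write(&vma->vm_lock->lock);
	/* WRITE_ONCE pairs with lockless readers of vm_lock_seq. */
	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
	up_write(&vma->vm_lock->lock);
}

So I agree it's mainly there as documentation.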

I posted this patch at
https://lore.kernel.org/all/20230918211608.3580629-1-surenb@google.com/
to fix the immediate problem.
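
For the documentation part, I'm thinking of something along these lines
for the walk_page_range() comment (hypothetical wording):

	/*
	 * Note: a positive return value from one of the mm_walk_ops
	 * callbacks ends the walk early, in which case the requested
	 * walk_lock policy may not have been applied to all VMAs in the
	 * range. Callers that need every VMA locked regardless must lock
	 * the rest themselves, as do_mbind() now does.
	 */
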
Thanks!

>
> Thanks,
> Hugh
