Message-ID: <CAJuCfpGp2CwGJmmwzK7WdudOyL1CCWVaERRK9qTtNA8SZ365SA@mail.gmail.com>
Date:   Thu, 14 Sep 2023 20:53:59 +0000
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     syzbot <syzbot+b591856e0f0139f83023@...kaller.appspotmail.com>,
        akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] kernel BUG in vma_replace_policy

On Thu, Sep 14, 2023 at 8:00 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Thu, Sep 14, 2023 at 7:09 PM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Thu, Sep 14, 2023 at 06:20:56PM +0000, Suren Baghdasaryan wrote:
> > > I think I found the problem, and the explanation is much simpler. While
> > > walking the page range, queue_folios_pte_range() encounters an
> > > unmovable page and returns 1. That causes a break from the loop inside
> > > walk_page_range(), and no more VMAs get locked. After that, the loop
> > > calling mbind_range() walks over all VMAs, even the ones the walk
> > > skipped after queue_folios_pte_range() bailed out, and that triggers
> > > this BUG assertion.
> > >
> > > Thinking what's the right way to handle this situation (what's the
> > > expected behavior here)...
> > > I think the safest way would be to modify walk_page_range() and make
> > > it continue calling process_vma_walk_lock() for all VMAs in the range
> > > even when __walk_page_range() returns a positive err. Any objection or
> > > alternative suggestions?
> >
> > So we only return 1 here if MPOL_MF_MOVE* & MPOL_MF_STRICT were
> > specified.  That means we're going to return an error, no matter what,
> > and there's no point in calling mbind_range().  Right?
> >
> > +++ b/mm/mempolicy.c
> > @@ -1334,6 +1334,8 @@ static long do_mbind(unsigned long start, unsigned long len,
> >         ret = queue_pages_range(mm, start, end, nmask,
> >                           flags | MPOL_MF_INVERT, &pagelist, true);
> >
> > +       if (ret == 1)
> > +               ret = -EIO;
> >         if (ret < 0) {
> >                 err = ret;
> >                 goto up_out;
> >
> > (I don't really understand this code, so it can't be this simple, can
> > it?  Why don't we just return -EIO from queue_folios_pte_range() if
> > this is the right answer?)
>
> Yeah, I'm trying to understand the expected behavior of this function
> to make sure we are not missing anything. I tried the simple fix I
> suggested in my previous email and it works, but I want to understand
> this function's logic a bit more before posting the fix.

So, the current functionality is that after queue_pages_range()
encounters an unmovable page, terminates the loop, and returns 1,
mbind_range() will still be called for the whole range
(https://elixir.bootlin.com/linux/latest/source/mm/mempolicy.c#L1345),
all pages in the pagelist will still be migrated
(https://elixir.bootlin.com/linux/latest/source/mm/mempolicy.c#L1355),
and only after that will the -EIO code be returned
(https://elixir.bootlin.com/linux/latest/source/mm/mempolicy.c#L1362).
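
To make that ordering concrete, here is a minimal userspace model of
the flow just described (the functions are hypothetical stand-ins, not
the kernel code):

#include <stdio.h>

#define EIO 5

/* Hypothetical stand-ins for the do_mbind() steps described above. */
static int queue_pages_range(void) { return 1; } /* unmovable page hit */
static void mbind_range(void)   { puts("mbind_range: policy applied"); }
static void migrate_pages(void) { puts("migrate_pages: pages migrated"); }

int main(void)
{
        int ret = queue_pages_range();

        if (ret < 0)            /* only a negative ret aborts early */
                return ret;
        mbind_range();          /* still called for the whole range */
        migrate_pages();        /* queued pages are still migrated */
        if (ret > 0)            /* only then is the error reported */
                return -EIO;
        return 0;
}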
So, if we follow Matthew's suggestion, we will be altering the current
behavior, which I assume is not what we want to do.
The simple fix I was thinking about that would not alter this behavior
is something like this:

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index b7d7e4fcfad7..c37a7e8be4cb 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -493,11 +493,17 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
                 if (!vma) { /* after the last vma */
                         walk.vma = NULL;
                         next = end;
+                        if (err)
+                                continue;
+
                         if (ops->pte_hole)
                                 err = ops->pte_hole(start, next, -1, &walk);
                 } else if (start < vma->vm_start) { /* outside vma */
                         walk.vma = NULL;
                         next = min(end, vma->vm_start);
+                        if (err)
+                                continue;
+
                         if (ops->pte_hole)
                                 err = ops->pte_hole(start, next, -1, &walk);
                 } else { /* inside vma */
@@ -505,6 +511,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
                         walk.vma = vma;
                         next = min(end, vma->vm_end);
                         vma = find_vma(mm, vma->vm_end);
+                        if (err)
+                                continue;

                         err = walk_page_test(start, next, &walk);
                         if (err > 0) {
@@ -520,8 +528,6 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
                                 break;
                         err = __walk_page_range(start, next, &walk);
                 }
-                if (err)
-                        break;
         } while (start = next, start < end);
         return err;
 }
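
And as a sanity check of the intended semantics, a minimal userspace
model of the loop after this change (again hypothetical stand-ins, not
the kernel code): once err is set, the remaining VMAs are still
visited and locked, no further callbacks run, and the first error is
what gets returned:

#include <stdio.h>

/* Hypothetical model: three VMAs, the walk callback fails on the first. */
static void lock_vma(int i) { printf("vma %d locked\n", i); }
static int walk_vma(int i)  { return i == 0 ? 1 : 0; } /* unmovable page */

int main(void)
{
        int err = 0;
        int i;

        for (i = 0; i < 3; i++) {
                lock_vma(i);            /* still taken for every VMA */
                if (err)
                        continue;       /* but callbacks are skipped */
                err = walk_vma(i);
        }
        printf("err = %d\n", err);      /* first error is preserved */
        return 0;
}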

WDYT?
