Message-ID: <c5c51578-efdc-7de-2238-4039fb1b6c36@google.com>
Date: Fri, 15 Sep 2023 19:43:58 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Suren Baghdasaryan <surenb@...gle.com>
cc: Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
syzbot <syzbot+b591856e0f0139f83023@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] kernel BUG in vma_replace_policy
On Fri, 15 Sep 2023, Suren Baghdasaryan wrote:
> On Fri, Sep 15, 2023 at 9:09 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > Thanks for the feedback, Hugh!
> > Yeah, this positive error handling is kinda weird. If this behavior
> > (do as much as possible even if we fail eventually) is specific to
> > mbind(), then we could keep walk_page_range() as is and lock the
> > VMAs inside the loop that calls mbind_range(), conditional on ret
> > being positive. That would be the simplest solution IMHO. But if we
> > expect walk_page_range() to always apply the requested
> > page_walk_lock policy to all VMAs, even when some mm_walk_ops
> > callback returns a positive error somewhere in the middle of the
> > walk, then my fix would handle that. So, to me the important
> > question is how we want walk_page_range() to behave in these
> > conditions. I think we should answer that first and document it.
> > Then the fix will be easy.
>
> I looked at all the cases where we perform a page walk while locking
> VMAs, and mbind() seems to be the only one that would require
> walk_page_range() to lock all VMAs even for a failed walk.
Yes, I can well believe that.
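
For anyone reading along, here are the mechanics in question, as a
paraphrased sketch of the locking helper and the walk_page_range()
loop - from memory of 6.5-era mm/pagewalk.c, not verbatim. Each VMA
is locked according to ops->walk_lock just before its callbacks run,
and the loop bails on the first non-zero return, so every VMA past
the failure point is left unlocked:

static inline void process_vma_walk_lock(struct vm_area_struct *vma,
					 enum page_walk_lock walk_lock)
{
#ifdef CONFIG_PER_VMA_LOCK
	switch (walk_lock) {
	case PGWALK_WRLOCK:
		vma_start_write(vma);
		break;
	case PGWALK_WRLOCK_VERIFY:
		vma_assert_write_locked(vma);
		break;
	case PGWALK_RDLOCK:
		/* PGWALK_RDLOCK is handled by the mmap_lock itself */
		break;
	}
#endif
}

/* ...and the per-VMA loop in walk_page_range(), heavily trimmed: */
	do {
		...
		/* apply the requested per-VMA lock before the callbacks */
		process_vma_walk_lock(vma, ops->walk_lock);
		err = __walk_page_range(start, next, &walk);
		if (err)
			break;	/* VMAs after this point are never locked */
	} while (start = next, start < end);
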
> So, I suggest this fix instead, and I can also document that if
> walk_page_range() fails it might not apply the page_walk_lock policy
> to all of the VMAs.
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 42b5567e3773..cbc584e9b6ca 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1342,6 +1342,9 @@ static long do_mbind(unsigned long start, unsigned long len,
>  	vma_iter_init(&vmi, mm, start);
>  	prev = vma_prev(&vmi);
>  	for_each_vma_range(vmi, vma, end) {
> +		/* If queue_pages_range failed then not all VMAs might be locked */
> +		if (ret)
> +			vma_start_write(vma);
>  		err = mbind_range(&vmi, vma, &prev, start, end, new);
>  		if (err)
>  			break;
>
> If this looks good I'll post the patch. Matthew, Hugh, anyone else?
Yes, I do prefer this to adding those positive-ret modifications into
the generic pagewalk. The "if (ret)" above is just a minor optimization
that I would probably not have bothered with (does it even save any
atomics?), but I guess it helps as documentation.
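
For reference, vma_start_write() as I remember it from 6.5-era
include/linux/mm.h - a paraphrased sketch rather than verbatim:

static inline void vma_start_write(struct vm_area_struct *vma)
{
	int mm_lock_seq;

	/*
	 * Fast path: already write-locked under this same mmap_lock
	 * write section - returns without touching the rwsem.
	 */
	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;

	down_write(&vma->vm_lock->lock);
	/* Record the mm seqcount so later calls take the fast path */
	vma->vm_lock_seq = mm_lock_seq;
	up_write(&vma->vm_lock->lock);
}

So for the VMAs the walk did manage to lock, an unguarded call would
only cost the function call and the vm_lock_seq compare; the
down_write()/up_write() pair is paid once per still-unlocked VMA
either way.
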
I think it's quite likely that mbind() will be changed sooner or later
not to need this; but it's much the best to fix this vma locking issue
urgently as above, without depending on any mbind() behavioral discussions.
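
And for context on how a positive ret reaches that loop at all -
paraphrased from 6.5-era do_mbind(), not verbatim, with argument lists
trimmed: a negative return from queue_pages_range() is a hard error
that skips mbind_range() entirely, while a positive return only
records that some pages did not conform, so the policy update still
proceeds over all the VMAs in the range:

	ret = queue_pages_range(mm, start, end, nmask, flags, &pagelist);
	if (ret < 0) {
		err = ret;
		goto up_out;	/* hard error: mbind_range() never runs */
	}

	/*
	 * ret > 0: the walk stopped early on nonconforming pages, but
	 * the new policy is still applied below - hence the need to
	 * write-lock the remaining VMAs by hand.
	 */
	vma_iter_init(&vmi, mm, start);
	prev = vma_prev(&vmi);
	for_each_vma_range(vmi, vma, end) {
		...
	}
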
Thanks,
Hugh