Message-ID: <CAJuCfpECwpQ8wHnwhkLztvvxZmP9rH+aW3A39BSzkZ9t2JK6dQ@mail.gmail.com>
Date: Thu, 14 Sep 2023 18:20:56 +0000
From: Suren Baghdasaryan <surenb@...gle.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: syzbot <syzbot+b591856e0f0139f83023@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] kernel BUG in vma_replace_policy
On Wed, Sep 13, 2023 at 4:46 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Sep 13, 2023 at 4:05 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > On Tue, Sep 12, 2023 at 4:00 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > >
> > > On Tue, Sep 12, 2023 at 8:03 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > > >
> > > > On Tue, Sep 12, 2023 at 7:55 AM Matthew Wilcox <willy@...radead.org> wrote:
> > > > >
> > > > > On Tue, Sep 12, 2023 at 06:30:46AM +0100, Matthew Wilcox wrote:
> > > > > > On Tue, Sep 05, 2023 at 06:03:49PM -0700, syzbot wrote:
> > > > > > > Hello,
> > > > > > >
> > > > > > > syzbot found the following issue on:
> > > > > > >
> > > > > > > HEAD commit: a47fc304d2b6 Add linux-next specific files for 20230831
> > > > > > > git tree: linux-next
> > > > > > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=16502ddba80000
> > > > > > > kernel config: https://syzkaller.appspot.com/x/.config?x=6ecd2a74f20953b9
> > > > > > > dashboard link: https://syzkaller.appspot.com/bug?extid=b591856e0f0139f83023
> > > > > > > compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> > > > > > > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=120e7d70680000
> > > > > > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1523f9c0680000
> > > > > > >
> > > > > > > Downloadable assets:
> > > > > > > disk image: https://storage.googleapis.com/syzbot-assets/b2e8f4217527/disk-a47fc304.raw.xz
> > > > > > > vmlinux: https://storage.googleapis.com/syzbot-assets/ed6cdcc09339/vmlinux-a47fc304.xz
> > > > > > > kernel image: https://storage.googleapis.com/syzbot-assets/bd9b2475bf5a/bzImage-a47fc304.xz
> > > > > > >
> > > > > > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > > > > > Reported-by: syzbot+b591856e0f0139f83023@...kaller.appspotmail.com
> > > > > >
> > > > > > #syz test
> > > > > >
> > > > > > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > > > > > index 42b5567e3773..90ad5fe60824 100644
> > > > > > --- a/mm/mempolicy.c
> > > > > > +++ b/mm/mempolicy.c
> > > > > > @@ -1342,6 +1342,7 @@ static long do_mbind(unsigned long start, unsigned long len,
> > > > > >  	vma_iter_init(&vmi, mm, start);
> > > > > >  	prev = vma_prev(&vmi);
> > > > > >  	for_each_vma_range(vmi, vma, end) {
> > > > > > +		vma_start_write(vma);
> > > > > >  		err = mbind_range(&vmi, vma, &prev, start, end, new);
> > > > > >  		if (err)
> > > > > >  			break;
> > > > >
> > > > > Suren, can you take a look at this? The VMA should be locked by the
> > > > > call to queue_pages_range(), but by the time we get to here, the VMA
> > > > > isn't locked. I don't see anywhere that we cycle the mmap_lock (which
> > > > > would unlock the VMA), but I could have missed something. The two
> > > > > VMA walks should walk over the same set of VMAs. Certainly the VMA
> > > > > being dumped should have been locked by the pagewalk:
> > >
> > > Yeah, this looks strange. queue_pages_range() should have locked all
> > > the vmas and the tree can't change since we are holding mmap_lock for
> > > write. I'll try to reproduce later today and see what's going on.
> >
> > So far I was unable to reproduce the issue. I tried with Linus' ToT
> > using the attached config. linux-next ToT does not boot with this
> > config but defconfig boots and fails to reproduce the issue. I'll try
> > to figure out why current linux-next does not like this config.
>
> Ok, I found a way to reproduce this using the config and kernel
> baseline reported on 2023/09/06 06:24 at
> https://syzkaller.appspot.com/bug?extid=b591856e0f0139f83023. I
> suspect mmap_lock is being dropped by a racing thread, similar to this
> issue we fixed before here:
> https://lore.kernel.org/all/CAJuCfpH8ucOkCFYrVZafUAppi5+mVhy=uD+BK6-oYX=ysQv5qQ@mail.gmail.com/
> Anyway, I'm on it and will report once I figure out the issue.
I think I found the problem, and the explanation is much simpler. While
walking the page range, queue_folios_pte_range() encounters an
unmovable page and returns 1. That breaks out of the loop inside
walk_page_range(), so none of the remaining VMAs get locked. After
that, the loop calling mbind_range() still walks over all VMAs in the
range, including the ones the aborted page walk never locked, and that
triggers the BUG assertion in vma_replace_policy().
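To spell out the sequence (condensed by hand, not the literal mm/ code;
the function names are real but the bodies are approximate):

/* 1st pass: queue_pages_range() -> walk_page_range() in mm/pagewalk.c */
do {
	/* ... find the vma for this chunk, compute next ... */
	process_vma_walk_lock(vma, ops->walk_lock);	/* vma_start_write() */
	err = __walk_page_range(start, next, &walk);	/* -> queue_folios_pte_range() */
	if (err)		/* 1 when an unmovable page is found */
		break;		/* VMAs after this point are never locked */
} while (start = next, start < end);

/* 2nd pass in do_mbind(): still covers the whole [start, end) range */
for_each_vma_range(vmi, vma, end) {
	err = mbind_range(&vmi, vma, &prev, start, end, new);
	/* -> vma_replace_policy() expects the VMA to be write-locked and
	 * hits the BUG on the VMAs the aborted first pass never reached */
	if (err)
		break;
}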
I'm still thinking about the right way to handle this situation (what
the expected behavior here should be)...
I think the safest fix would be to modify walk_page_range() so that it
keeps calling process_vma_walk_lock() for all VMAs in the range even
when __walk_page_range() returns a positive err. Any objections or
alternative suggestions?
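Very roughly, something like the below is what I have in mind
(untested, written from memory of the walk_page_range() loop;
"walk_aborted" is just an illustrative local, not an existing
variable):

	/* at the bottom of the VMA loop in walk_page_range() */
	if (err > 0) {
		/*
		 * A positive return (e.g. queue_folios_pte_range() finding
		 * an unmovable page) currently breaks out of the loop.
		 * Instead, remember it and keep iterating so that
		 * process_vma_walk_lock() still runs for every remaining
		 * VMA; __walk_page_range() would be skipped for those VMAs
		 * (guarded by !walk_aborted) and the saved value returned
		 * at the end.
		 */
		walk_aborted = err;
		err = 0;
		continue;
	}
	if (err)
		break;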
>
> >
> > >
> > > >
> > > > Sure, I'll look into this today. Somehow this report slipped by me
> > > > unnoticed. Thanks!
> > > >
> > > > >
> > > > > vma ffff888077381a00 start 0000000020c2a000 end 0000000021000000 mm ffff8880258a8980
> > > > > prot 25 anon_vma 0000000000000000 vm_ops 0000000000000000
> > > > > pgoff 20c2a file 0000000000000000 private_data 0000000000000000
> > > > > flags: 0x8100077(read|write|exec|mayread|maywrite|mayexec|account|softdirty)
> > > > >
> > > > > syscall(__NR_mbind, /*addr=*/0x20400000ul, /*len=*/0xc00000ul, /*mode=*/4ul,
> > > > > /*nodemask=*/0ul, /*maxnode=*/0ul, /*flags=*/3ul);
> > > > >
> > > > > 20400000 + c00000 should overlap 20c2a000-21000000