Message-ID: <20230308161927.lb4npblk2q4vkxmg@revolver>
Date: Wed, 8 Mar 2023 11:19:27 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: maple-tree@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Pengfei Xu <pengfei.xu@...el.com>,
syzbot+2ee18845e89ae76342c5@...kaller.appspotmail.com,
Matthew Wilcox <willy@...radead.org>, heng.su@...el.com,
lkp@...el.com, Stable@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH] mm/ksm: Fix race with ksm_exit() in VMA iteration
* David Hildenbrand <david@...hat.com> [230308 04:41]:
> On 07.03.23 21:59, Liam R. Howlett wrote:
> > ksm_exit() may remove the mm from the ksm_scan between the unlocking of
> > the ksm_mmlist and the start of the VMA iteration. This results in the
> > mmap_read_lock() not being taken and a report from lockdep that the mm
> > isn't locked in the maple tree code.
>
> I'm confused.
Thanks for looking at this. My explanation is incorrect.
> The code does
>
> mmap_read_lock(mm);
> ...
> for_each_vma(vmi, vma) {
> mmap_read_unlock(mm);
>
> How can we not take the mmap_read_lock() ? Or am I staring at the wrong
> mmap_read_lock() ?
That's the right one. The mmap lock is taken, but it is not the lock
that lockdep ends up checking. Let me try again.

Checking the mm struct against the one in the vmi confirms they are the
same, so lockdep is telling us that the lock we took doesn't match the
lock it expected. I verified that the lock is still the correct one
right before the for_each_vma() call by inserting a BUG_ON(), which
never triggered with the reproducer.
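Roughly, the idea of that check (just a debugging hack, not part of the
patch; the mas/tree field names come from the vma_iterator, so treat
this as a sketch rather than the exact expression):

        VMA_ITERATOR(vmi, mm, 0);
        ...
        mmap_read_lock(mm);
        /* Debug only: the iterator must be walking this mm's tree. */
        BUG_ON(vmi.mas.tree != &mm->mm_mt);
        for_each_vma(vmi, vma) {
                ...
        }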
ksm_test_exit() uses the mm->mm_users atomic to detect an exiting mm;
mm_users is normally manipulated through mmget(), mmput(), and friends.

__ksm_exit() and unmerge_and_remove_all_rmap_items() handle freeing of
the mm via the mm->mm_count atomic, normally through mmgrab() and
mmdrop().

mmput() calls __mmput() when mm_users drops to zero. __mmput() calls
ksm_exit(), continues with the rest of the teardown (including
exit_mmap()), and only calls mmdrop() at the very end.
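For reference, ksm_test_exit() is just (mm/ksm.c, paraphrased):

        static inline bool ksm_test_exit(struct mm_struct *mm)
        {
                return atomic_read(&mm->mm_users) == 0;
        }

and the ordering in __mmput() (kernel/fork.c, heavily trimmed to the
calls that matter here):

        static inline void __mmput(struct mm_struct *mm)
        {
                ...
                ksm_exit(mm);           /* may run __ksm_exit() */
                ...
                exit_mmap(mm);          /* destroys the VMA maple tree */
                ...
                mmdrop(mm);             /* mm_count finally dropped here */
        }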
So, I believe what is happening is that the maple tree's external-lock
flag (the one lockdep uses to decide which lock to check) is being
cleared before we call the iterator:
task 1                                  task 2

unmerge_and_remove_all_rmap_items()
  spin_lock(&ksm_mmlist_lock);
  ksm_scan.mm_slot is set
  spin_unlock(&ksm_mmlist_lock);

=======================================================================
At this point mm->mm_users is 0, but mm_count is not, as it will only
be decremented at the end of __mmput().
=======================================================================

                                        __mmput()
                                          ksm_exit()
                                            __ksm_exit()
                                              spin_lock(&ksm_mmlist_lock);
                                              mm_slot is set
                                              spin_unlock(&ksm_mmlist_lock);
                                              mm_slot == ksm_scan.mm_slot, so
                                              mmap_write_lock();
                                              mmap_write_unlock();
                                              return
                                          exit_mmap()
                                            ...
                                            mmap_write_lock();
                                            __mt_destroy()
                                              Free all maple tree nodes
                                              mt->ma_flags = 0;
                                            mmap_write_unlock();
                                          ...
  mmap_read_lock()
  for_each_vma()
    lockdep checks *internal* spinlock
This was fine before the VMA iterator conversion: the old linked-list
for loop did not check the locking, and it would have hit the
ksm_test_exit() check before any problem arose. Now we get a lockdep
warning because the maple tree's external-lock flag has been cleared.
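(From memory, the lockdep dispatch in the maple tree is roughly the
below - treat the helper names as an approximation - which is why a
zeroed ma_flags makes lockdep look at the internal spinlock instead of
the mmap lock:

        static inline bool mt_locked(const struct maple_tree *mt)
        {
                return mt_external_lock(mt) ? mt_lock_is_held(mt) :
                        lockdep_is_held(&mt->ma_lock);
        }

where mt_external_lock() tests MT_FLAGS_LOCK_EXTERN in mt->ma_flags.)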
How about this as the start to the commit message:
The VMA iterator may trigger a lockdep warning if the mm is in the
process of being cleaned up before obtaining the mmap_read_lock().
>
> >
> > Fix the race by checking if this mm has been removed before iterating
> > the VMAs. __ksm_exit() uses the mmap lock to synchronize the freeing of
> > an mm, so it is safe to keep iterating over the VMAs when it is going to
> > be freed.
> >
> > This change will slow down the mm exit during the race condition, but
> > will speed up the non-race scenarios iteration over the VMA list, which
> > should be much more common.
>
> Would leaving the existing check in help to just stop scanning faster in
> that case?
Yes. But why? We would stop the scanning faster in the race case, but
slow the normal case down.

This check was there to ensure that the mm isn't torn down while we are
iterating over its VMAs. Hugh (Cc'ed) added it in 2009, and the
fundamental problem he describes in commit 9ba692948008 ("ksm: fix oom
deadlock") is that exit_mmap() did not take the mmap lock - which is no
longer the case. We are safe to iterate the VMAs under the
mmap_read_lock() because the mmap_write_lock() is taken during teardown
of the VMA tree today.
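That is, the tail of exit_mmap() now looks roughly like this (heavily
trimmed):

        mmap_write_lock(mm);
        ...
        __mt_destroy(&mm->mm_mt);  /* zeroes ma_flags, incl. the external-lock flag */
        mmap_write_unlock(mm);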
>
> >
> > Reported-by: Pengfei Xu <pengfei.xu@...el.com>
> > Link: https://lore.kernel.org/lkml/ZAdUUhSbaa6fHS36@xpf.sh.intel.com/
> > Reported-by: syzbot+2ee18845e89ae76342c5@...kaller.appspotmail.com
> > Link: https://syzkaller.appspot.com/bug?id=64a3e95957cd3deab99df7cd7b5a9475af92c93e
> > Cc: linux-mm@...ck.org
> > Cc: linux-kernel@...r.kernel.org
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> > Cc: heng.su@...el.com
> > Cc: lkp@...el.com
> > Cc: <Stable@...r.kernel.org>
> > Fixes: a5f18ba07276 ("mm/ksm: use vma iterators instead of vma linked list")
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> > ---
> > mm/ksm.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index 525c3306e78b..723ddbe6ea97 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -1044,9 +1044,10 @@ static int unmerge_and_remove_all_rmap_items(void)
> > mm = mm_slot->slot.mm;
> > mmap_read_lock(mm);
>
> Better add a comment:
>
> /*
> * Don't iterate any VMAs if we might be racing against ksm_exit(),
> * just exit early.
> */
>
> > + if (ksm_test_exit(mm))
> > + goto mm_exiting;
> > +
> > for_each_vma(vmi, vma) {
> > - if (ksm_test_exit(mm))
> > - break;
> > if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
> > continue;
> > err = unmerge_ksm_pages(vma,
> > @@ -1055,6 +1056,7 @@ static int unmerge_and_remove_all_rmap_items(void)
> > goto error;
> > }
> > +mm_exiting:
> > remove_trailing_rmap_items(&mm_slot->rmap_list);
> > mmap_read_unlock(mm);
>
>
>
> --
> Thanks,
>
> David / dhildenb
>