Message-ID: <d6670aa7-37ee-85aa-1053-96284a2f6720@redhat.com>
Date: Wed, 8 Mar 2023 10:41:10 +0100
From: David Hildenbrand <david@...hat.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>,
maple-tree@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Pengfei Xu <pengfei.xu@...el.com>,
syzbot+2ee18845e89ae76342c5@...kaller.appspotmail.com,
Matthew Wilcox <willy@...radead.org>, heng.su@...el.com,
lkp@...el.com, Stable@...r.kernel.org
Subject: Re: [PATCH] mm/ksm: Fix race with ksm_exit() in VMA iteration
On 07.03.23 21:59, Liam R. Howlett wrote:
> ksm_exit() may remove the mm from the ksm_scan between the unlocking of
> the ksm_mmlist and the start of the VMA iteration. This results in the
> mmap_read_lock() not being taken and a report from lockdep that the mm
> isn't locked in the maple tree code.
I'm confused. The code does

	mmap_read_lock(mm);
	...
	for_each_vma(vmi, vma) {
	...
	mmap_read_unlock(mm);

How can we not take the mmap_read_lock()? Or am I staring at the wrong
mmap_read_lock()?
>
> Fix the race by checking if this mm has been removed before iterating
> the VMAs. __ksm_exit() uses the mmap lock to synchronize the freeing of
> an mm, so it is safe to keep iterating over the VMAs when it is going to
> be freed.
>
> This change will slow down the mm exit during the race condition, but
> will speed up VMA iteration in the non-race scenario, which should be
> much more common.
Would leaving the existing check in help to just stop scanning faster in
that case?
>
> Reported-by: Pengfei Xu <pengfei.xu@...el.com>
> Link: https://lore.kernel.org/lkml/ZAdUUhSbaa6fHS36@xpf.sh.intel.com/
> Reported-by: syzbot+2ee18845e89ae76342c5@...kaller.appspotmail.com
> Link: https://syzkaller.appspot.com/bug?id=64a3e95957cd3deab99df7cd7b5a9475af92c93e
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: heng.su@...el.com
> Cc: lkp@...el.com
> Cc: <Stable@...r.kernel.org>
> Fixes: a5f18ba07276 ("mm/ksm: use vma iterators instead of vma linked list")
> Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> ---
> mm/ksm.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 525c3306e78b..723ddbe6ea97 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1044,9 +1044,10 @@ static int unmerge_and_remove_all_rmap_items(void)
>
> mm = mm_slot->slot.mm;
> mmap_read_lock(mm);
Better add a comment:
/*
* Don't iterate any VMAs if we might be racing against ksm_exit(),
* just exit early.
*/
> + if (ksm_test_exit(mm))
> + goto mm_exiting;
> +
> for_each_vma(vmi, vma) {
> - if (ksm_test_exit(mm))
> - break;
> if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
> continue;
> err = unmerge_ksm_pages(vma,
> @@ -1055,6 +1056,7 @@ static int unmerge_and_remove_all_rmap_items(void)
> goto error;
> }
>
> +mm_exiting:
> remove_trailing_rmap_items(&mm_slot->rmap_list);
> mmap_read_unlock(mm);
>
--
Thanks,
David / dhildenb