Message-ID: <7eb2849e-ad6d-11c5-a37d-806a1c62bb3e@oracle.com>
Date: Tue, 28 Dec 2021 19:52:37 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
syzbot+4e697fe80a31aa7efe21@...kaller.appspotmail.com,
kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] hugetlbfs: Fix off-by-one error in
hugetlb_vmdelete_list()
+Cc Andrew if he wants to take it through his tree.
On 12/28/21 15:42, Sean Christopherson wrote:
> Pass "end - 1" instead of "end" when walking the interval tree in
> hugetlb_vmdelete_list() to fix an inclusive vs. exclusive bug. The two
> callers that pass a non-zero "end" treat it as exclusive, whereas the
> interval tree iterator expects an inclusive "last". E.g. punching a hole
> in a file that precisely matches the size of a single hugepage, with a
> vma starting right on the boundary, will result in unmap_hugepage_range()
> being called twice, with the second call having start==end.
>
> The off-by-one error doesn't cause functional problems as
> __unmap_hugepage_range() turns into a massive nop due to short-circuiting
> its for-loop on "address < end". But, the mmu_notifier invocations to
> invalidate_range_{start,end}() are passed a bogus zero-sized range, which
> may be unexpected behavior for secondary MMUs.
>
> The bug was exposed by commit ed922739c919 ("KVM: Use interval tree to do
> fast hva lookup in memslots"), currently queued in the KVM tree for 5.17,
> which added a WARN to detect ranges with start==end.
>
> Reported-by: syzbot+4e697fe80a31aa7efe21@...kaller.appspotmail.com
> Fixes: 1bfad99ab425 ("hugetlbfs: hugetlb_vmtruncate_list() needs to take a range to delete")
> Cc: kvm@...r.kernel.org
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
Thanks Sean!
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
>
> Not sure if this should go to stable@. It's mostly harmless, and likely
> nothing more than a minor performance blip when it's not harmless.
I am also unsure about the need to send to stable. It is possible automation
will pick it up and make that decision for us.
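For anyone following along, below is a minimal standalone sketch of the
exclusive-end vs. inclusive-last mismatch described above. It is illustrative
only (not the hugetlbfs or interval tree code; the vma layout and addresses
are made up), but it reproduces the zero-sized range for a vma that starts
right on the boundary:

/*
 * Illustrative sketch only -- not the hugetlbfs or interval tree code;
 * the layout and values are made up.  It shows how handing an exclusive
 * "end" to a lookup that expects an inclusive "last" picks up one vma
 * too many, producing a zero-sized range for a vma that starts exactly
 * on the boundary, and how "end - 1" avoids that.
 */
#include <stdio.h>

struct vma { unsigned long start, end; };	/* end is exclusive */

int main(void)
{
	/* Two adjacent mappings, one 2 MiB hugepage each. */
	struct vma vmas[] = {
		{ 0x200000UL, 0x400000UL },
		{ 0x400000UL, 0x600000UL },	/* starts right at the hole's end */
	};
	unsigned long hole_start = 0x200000UL;
	unsigned long hole_end   = 0x400000UL;	/* exclusive */

	/* The "last" argument handed to the interval-tree-style lookup. */
	unsigned long last_buggy = hole_end;		/* off by one */
	unsigned long last_fixed = hole_end - 1;	/* inclusive  */

	for (int i = 0; i < 2; i++) {
		struct vma *v = &vmas[i];
		/* Clamp the hole to this vma, as the unmap path would. */
		unsigned long s = hole_start > v->start ? hole_start : v->start;
		unsigned long e = hole_end   < v->end   ? hole_end   : v->end;

		/* Inclusive overlap test, as an interval tree iterator does. */
		if (v->start <= last_buggy && v->end - 1 >= hole_start)
			printf("buggy: unmap vma %d over [%#lx, %#lx)%s\n",
			       i, s, e, s == e ? "  <- zero-sized!" : "");
		if (v->start <= last_fixed && v->end - 1 >= hole_start)
			printf("fixed: unmap vma %d over [%#lx, %#lx)\n",
			       i, s, e);
	}
	return 0;
}

Compiled and run, the "buggy" variant reports a second, zero-sized unmap for
the vma starting at 0x400000, which is the kind of range the mmu_notifier
callbacks would be handed; the "fixed" variant only visits the first vma.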
--
Mike Kravetz