Message-ID: <8c5f3503-860d-b3c0-4838-0a4a04f64a47@redhat.com>
Date: Mon, 21 Dec 2020 19:31:51 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>,
Lai Jiangshan <laijs@...ux.alibaba.com>, stable@...r.kernel.org
Subject: Re: [PATCH V3] kvm: check tlbs_dirty directly
On 17/12/20 16:41, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@...ux.alibaba.com>
>
> In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:
> need_tlb_flush |= kvm->tlbs_dirty;
> with need_tlb_flush's type being int and tlbs_dirty's type being long.
>
> This means tlbs_dirty is effectively truncated to an int, so its
> upper 32 bits are silently discarded. Check tlbs_dirty directly at
> its full width instead of propagating it into need_tlb_flush.
>
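As an aside, a minimal user-space sketch of the truncation (hypothetical
values, assuming a 64-bit long as on x86-64; not part of the patch):

	#include <stdio.h>

	int main(void)
	{
		/* a dirty count whose low 32 bits happen to be zero */
		long tlbs_dirty = 1L << 32;
		int need_tlb_flush = 0;

		/* old code: the long is implicitly narrowed to int */
		need_tlb_flush |= tlbs_dirty;
		printf("old check: %d\n", need_tlb_flush != 0); /* 0: flush missed */

		/* patched check: the long is tested at full width */
		printf("new check: %d\n", tlbs_dirty != 0);     /* 1: flush done */
		return 0;
	}
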
> Note: it's _extremely_ unlikely that this truncation of the upper 32
> bits causes problems in practice. It would require tlbs_dirty to land
> on an exact multiple of 2^32 (~4 billion), and KVM would need to be
> using shadow paging or running a nested guest.
>
> Cc: stable@...r.kernel.org
> Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
> Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> ---
> Changed from v1:
> Update the patch and the changelog as Sean Christopherson suggested.
>
> Changed from v2:
> Don't change the type of need_tlb_flush.
>
> virt/kvm/kvm_main.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 2541a17ff1c4..3083fb53861d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -482,9 +482,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> kvm->mmu_notifier_count++;
> need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
> range->flags);
> - need_tlb_flush |= kvm->tlbs_dirty;
> /* we've to flush the tlb before the pages can be freed */
> - if (need_tlb_flush)
> + if (need_tlb_flush || kvm->tlbs_dirty)
> kvm_flush_remote_tlbs(kvm);
>
> spin_unlock(&kvm->mmu_lock);
>
Queued, thanks.
Paolo