Message-ID: <YrZAZXHJTsUp8yuP@google.com>
Date: Fri, 24 Jun 2022 22:53:25 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Hou Wenlong <houwenlong.hwl@...group.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Lan Tianyu <Tianyu.Lan@...rosoft.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] KVM: x86/mmu: Fix wrong gfn range of tlb flushing in
kvm_set_pte_rmapp()
On Fri, Jun 24, 2022, Hou Wenlong wrote:
> When the spte of a huge page is dropped in kvm_set_pte_rmapp(),
> the whole gfn range covered by the spte should be flushed.
> However, rmap_walk_init_level() doesn't align the gfn down
> for the new level like the tdp iterator does, so the gfn used
> in kvm_set_pte_rmapp() is not the base gfn of the huge page,
> and the size of the gfn range is wrong as well. Since the
> base gfn of the huge page is more meaningful during the rmap
> walk, align the gfn down for the new level and use the correct
> huge page size for tlb flushing in kvm_set_pte_rmapp().
It's also worth noting that kvm_set_pte_rmapp() is the only user of the rmap
iterators that consumes @gfn, i.e. modifying iterator->gfn is safe-ish.
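
As an aside for anyone skimming, the "& -KVM_PAGES_PER_HPAGE(level)" trick
works because KVM_PAGES_PER_HPAGE() is a power of two, so negating it yields
a mask that clears the low bits.  A standalone toy illustrating the
arithmetic (not kernel code; the macro value assumes a 2M huge page with 4K
base pages):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Stand-in for KVM_PAGES_PER_HPAGE() of a 2M page with 4K base pages. */
#define PAGES_PER_HPAGE 512ULL

int main(void)
{
	gfn_t gfn = 0x12345;

	/*
	 * -512 has the same bit pattern as ~511, so the AND clears the
	 * low nine bits, rounding gfn down to its 2M-aligned base gfn.
	 */
	gfn_t base = gfn & -PAGES_PER_HPAGE;

	printf("gfn 0x%llx -> base 0x%llx, flush %llu pages\n",
	       (unsigned long long)gfn, (unsigned long long)base,
	       (unsigned long long)PAGES_PER_HPAGE);
	return 0;	/* prints: gfn 0x12345 -> base 0x12200, flush 512 pages */
}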
> Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
> Signed-off-by: Hou Wenlong <houwenlong.hwl@...group.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b8a1f5b46b9d..37bfc88ea212 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1427,7 +1427,7 @@ static bool kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
> }
>
> if (need_flush && kvm_available_flush_tlb_with_range()) {
> - kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
> + kvm_flush_remote_tlbs_with_address(kvm, gfn, KVM_PAGES_PER_HPAGE(level));
> return false;
> }
>
> @@ -1455,7 +1455,7 @@ static void
> rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
> {
> iterator->level = level;
> - iterator->gfn = iterator->start_gfn;
> + iterator->gfn = iterator->start_gfn & -KVM_PAGES_PER_HPAGE(level);
Hrm, arguably this should be done on start_gfn in slot_rmap_walk_init().  Having
iter->gfn be less than iter->start_gfn would be odd.
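
E.g. something like this (completely untested, and note the wrinkle that the
alignment is level-dependent: doing it once at init time would have to use
the largest level in the walk, which also satisfies every smaller level, but
at the cost of starting the lower levels before the original start_gfn):

static void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
				const struct kvm_memory_slot *slot,
				int start_level, int end_level,
				gfn_t start_gfn, gfn_t end_gfn)
{
	iterator->slot = slot;
	iterator->start_level = start_level;
	iterator->end_level = end_level;
	/*
	 * Align down to the largest page size covered by the walk so
	 * that iterator->gfn is aligned at every level and never ends
	 * up below iterator->start_gfn.
	 */
	iterator->start_gfn = start_gfn & -KVM_PAGES_PER_HPAGE(end_level);
	iterator->end_gfn = end_gfn;

	rmap_walk_init_level(iterator, iterator->start_level);
}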
> iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot);
> iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot);
> }
> --
> 2.31.1
>