Message-ID: <30c5f2f3-ff1e-4b0d-bb1f-eba95f578376@redhat.com>
Date: Sat, 30 Mar 2024 21:51:49 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Michael Roth <michael.roth@....com>, kvm@...r.kernel.org
Cc: linux-coco@...ts.linux.dev, linux-mm@...ck.org,
linux-crypto@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, mingo@...hat.com, jroedel@...e.de,
thomas.lendacky@....com, hpa@...or.com, ardb@...nel.org, seanjc@...gle.com,
vkuznets@...hat.com, jmattson@...gle.com, luto@...nel.org,
dave.hansen@...ux.intel.com, slp@...hat.com, pgonda@...gle.com,
peterz@...radead.org, srinivas.pandruvada@...ux.intel.com,
rientjes@...gle.com, dovmurik@...ux.ibm.com, tobin@....com, bp@...en8.de,
vbabka@...e.cz, kirill@...temov.name, ak@...ux.intel.com,
tony.luck@...el.com, sathyanarayanan.kuppuswamy@...ux.intel.com,
alpergun@...gle.com, jarkko@...nel.org, ashish.kalra@....com,
nikunj.dadhania@....com, pankaj.gupta@....com, liam.merwick@...cle.com,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v12 16/29] KVM: x86: Export the kvm_zap_gfn_range() for
the SNP use
On 3/29/24 23:58, Michael Roth wrote:
> From: Brijesh Singh <brijesh.singh@....com>
>
> While resolving the RMP page fault, there may be cases where the page
> level between the RMP entry and TDP does not match and the 2M RMP entry
> must be split into 4K RMP entries. Or a 2M TDP page may need to be broken
> into multiple 4K pages.
>
> To keep the RMP and TDP page levels in sync, zap the gfn range after
> splitting the pages in the RMP entry. The zap should force the TDP to
> get rebuilt with the new page level.
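
(As an aside on the quoted approach: below is a minimal, hypothetical sketch
of how a caller in the SNP RMP-fault path might use the helper exported by
this patch. Only kvm_zap_gfn_range() is from this series; the function name
and the split_rmp_to_4k() helper are illustrative assumptions, not code from
the patch.)

	/*
	 * Hypothetical caller: after a 2M RMP entry has been split into
	 * 4K entries, zap the 2M-aligned gfn range so that KVM rebuilds
	 * the TDP mapping at the new (4K) page level on the next fault.
	 */
	static int snp_split_rmp_and_zap(struct kvm *kvm, gfn_t gfn, u64 pfn)
	{
		gfn_t gfn_start = gfn & ~(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1);
		int ret;

		ret = split_rmp_to_4k(pfn);	/* hypothetical RMP-split helper */
		if (ret)
			return ret;

		/* Exported by this patch; forces the TDP level back in sync. */
		kvm_zap_gfn_range(kvm, gfn_start,
				  gfn_start + KVM_PAGES_PER_HPAGE(PG_LEVEL_2M));
		return 0;
	}
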
Just squash this into patch 17.
Paolo
> Signed-off-by: Brijesh Singh <brijesh.singh@....com>
> Signed-off-by: Ashish Kalra <ashish.kalra@....com>
> Signed-off-by: Michael Roth <michael.roth@....com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/mmu.h | 2 --
> arch/x86/kvm/mmu/mmu.c | 1 +
> 3 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a3f8eba8d8b6..49b294a8d917 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1950,6 +1950,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
> const struct kvm_memory_slot *memslot);
> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
> void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
> +void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
>
> int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 2c54ba5b0a28..89da37be241a 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -253,8 +253,6 @@ static inline bool kvm_mmu_honors_guest_mtrrs(struct kvm *kvm)
> return __kvm_mmu_honors_guest_mtrrs(kvm_arch_has_noncoherent_dma(kvm));
> }
>
> -void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
> -
> int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
>
> int kvm_mmu_post_init_vm(struct kvm *kvm);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0049d49aa913..c5af52e3f0c5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6772,6 +6772,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>
> return need_tlb_flush;
> }
> +EXPORT_SYMBOL_GPL(kvm_zap_gfn_range);
>
> static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
> const struct kvm_memory_slot *slot)