Message-ID: <ZV3Bwghwz63LmgMu@yilunxu-OptiPlex-7050>
Date: Wed, 22 Nov 2023 16:54:26 +0800
From: Xu Yilun <yilun.xu@...ux.intel.com>
To: Paul Durrant <paul@....org>
Cc: David Woodhouse <dwmw2@...radead.org>,
Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 07/15] KVM: pfncache: include page offset in uhva and
use it consistently
On Tue, Nov 21, 2023 at 06:02:15PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@...zon.com>
>
> Currently the pfncache page offset is sometimes determined using the gpa
> and sometimes the khva, whilst the uhva is always page-aligned. After a
> subsequent patch is applied the gpa will not always be valid so adjust
> the code to include the page offset in the uhva and use it consistently
> as the source of truth.
>
> Also, where a page-aligned address is required, use PAGE_ALIGN_DOWN()
> for clarity.
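(Just to restate the idea with a sketch, since the offset handling is easy to
misread; this is only an illustration using the names from the diff below, not
code taken from the patch:

	/* before: uhva kept page-aligned, offset re-derived from gpa/khva */
	gpc->khva = new_khva + offset_in_page(gpc->gpa);

	/* after: the uhva itself carries the offset and is the source of truth */
	gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva) + offset_in_page(gpa);
	gpc->khva = new_khva + offset_in_page(gpc->uhva);
)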
>
> Signed-off-by: Paul Durrant <pdurrant@...zon.com>
> ---
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: David Woodhouse <dwmw2@...radead.org>
>
> v8:
> - New in this version.
> ---
> virt/kvm/pfncache.c | 27 +++++++++++++++++++--------
> 1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index 0eeb034d0674..c545f6246501 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -48,10 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
> if (!gpc->active)
> return false;
>
> - if (offset_in_page(gpc->gpa) + len > PAGE_SIZE)
> + if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
> return false;
>
> - if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
> + if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
> return false;
>
> if (!gpc->valid)
> @@ -119,7 +119,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
> static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
> {
> /* Note, the new page offset may be different than the old! */
> - void *old_khva = gpc->khva - offset_in_page(gpc->khva);
> + void *old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
> kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
> void *new_khva = NULL;
> unsigned long mmu_seq;
> @@ -192,7 +192,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
>
> gpc->valid = true;
> gpc->pfn = new_pfn;
> - gpc->khva = new_khva + offset_in_page(gpc->gpa);
> + gpc->khva = new_khva + offset_in_page(gpc->uhva);
>
> /*
> * Put the reference to the _new_ pfn. The pfn is now tracked by the
> @@ -215,8 +215,8 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
> unsigned long page_offset = offset_in_page(gpa);
> bool unmap_old = false;
> - unsigned long old_uhva;
> kvm_pfn_t old_pfn;
> + bool hva_change = false;
> void *old_khva;
> int ret;
>
> @@ -242,8 +242,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> }
>
> old_pfn = gpc->pfn;
> - old_khva = gpc->khva - offset_in_page(gpc->khva);
> - old_uhva = gpc->uhva;
> + old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
>
> /* If the userspace HVA is invalid, refresh that first */
> if (gpc->gpa != gpa || gpc->generation != slots->generation ||
> @@ -259,13 +258,25 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> ret = -EFAULT;
> goto out;
> }
> +
> + hva_change = true;
> + } else {
> + /*
> + * No need to do any re-mapping if the only thing that has
> + * changed is the page offset. Just page align it to allow the
> + * new offset to be added in.
I don't understand how the uhva (or its page offset) could change when neither
the gpa nor the slot has changed. Maybe I'm just missing some Xen background,
but in a later patch you said your uhva would never change...
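(For reference, my simplified reading of the resulting flow in
__kvm_gpc_refresh(); this is only a sketch, not verbatim from the patch:

	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
	    kvm_is_error_hva(gpc->uhva)) {
		/* gpa or memslots changed: look the uhva up again */
		gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gpa_to_gfn(gpa));
		hva_change = true;
	} else {
		/* nothing changed: strip the old offset from the uhva... */
		gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva);
	}

	/* ...then add the offset back in */
	gpc->uhva += page_offset;

Since page_offset is offset_in_page(gpa) and the else branch is only taken when
gpa is unchanged, the re-alignment there looks like a no-op to me.)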
Thanks,
Yilun
> + */
> + gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva);
> }
>
> + /* Note: the offset must be correct before calling hva_to_pfn_retry() */
> + gpc->uhva += page_offset;
> +
> /*
> * If the userspace HVA changed or the PFN was already invalid,
> * drop the lock and do the HVA to PFN lookup again.
> */
> - if (!gpc->valid || old_uhva != gpc->uhva) {
> + if (!gpc->valid || hva_change) {
> ret = hva_to_pfn_retry(gpc);
> } else {
> /*
> --
> 2.39.2
>
>