Message-ID: <20230705115653.5whvhvcvmflqmcse@linux.intel.com>
Date: Wed, 5 Jul 2023 19:56:53 +0800
From: Yu Zhang <yu.c.zhang@...ux.intel.com>
To: David Stevens <stevensd@...omium.org>
Cc: Sean Christopherson <seanjc@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Peter Xu <peterx@...hat.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v7 3/8] KVM: Make __kvm_follow_pfn not imply FOLL_GET
On Tue, Jul 04, 2023 at 04:50:48PM +0900, David Stevens wrote:
> From: David Stevens <stevensd@...omium.org>
>
> Make it so that __kvm_follow_pfn does not imply FOLL_GET. This allows
> callers to resolve a gfn when the associated pfn has a valid struct page
> that isn't being actively refcounted (e.g. tail pages of non-compound
> higher order pages). For a caller to safely omit FOLL_GET, all usages of
> the returned pfn must be guarded by a mmu notifier.
>
> This also adds an is_refcounted_page out parameter to kvm_follow_pfn that
> is set when the returned pfn has an associated struct page with a valid
> refcount. Callers that don't pass FOLL_GET should remember this value
> and use it to avoid places like kvm_is_ad_tracked_page that assume a
> non-zero refcount.
>
> Signed-off-by: David Stevens <stevensd@...omium.org>
> ---
> include/linux/kvm_host.h | 10 ++++++
> virt/kvm/kvm_main.c | 67 +++++++++++++++++++++-------------------
> virt/kvm/pfncache.c | 2 +-
> 3 files changed, 47 insertions(+), 32 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ef2763c2b12e..a45308c7d2d9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1157,6 +1157,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
> void kvm_release_page_clean(struct page *page);
> void kvm_release_page_dirty(struct page *page);
>
> +void kvm_set_page_accessed(struct page *page);
> +void kvm_set_page_dirty(struct page *page);
> +
> struct kvm_follow_pfn {
> const struct kvm_memory_slot *slot;
> gfn_t gfn;
> @@ -1164,10 +1167,17 @@ struct kvm_follow_pfn {
> bool atomic;
> /* Allow a read fault to create a writeable mapping. */
> bool allow_write_mapping;
> +	/*
> +	 * Usage of the returned pfn will be guarded by a mmu notifier. Must
> +	 * be true if FOLL_GET is not set.
> +	 */
> + bool guarded_by_mmu_notifier;
And how is that guaranteed? Is there any place where the mmu notifier
invalidate sequence is checked?
B.R.
Yu