Message-ID: <6a82ee1a-b0d0-b6c2-21e9-8d8e2d2b6827@amd.com>
Date: Wed, 7 Aug 2024 09:25:42 +0200
From: "Gupta, Pankaj" <pankaj.gupta@....com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
David Matlack <dmatlack@...gle.com>
Subject: Re: [PATCH v2] KVM: x86/mmu: Clean up function comments for dirty
logging APIs
> Rework the function comment for kvm_arch_mmu_enable_log_dirty_pt_masked()
> into the body of the function, as it has gotten a bit stale, is harder to
> read without the code context, and is the last source of warnings for W=1
> builds in KVM x86 due to using a kernel-doc comment without documenting
> all parameters.
>
> Opportunistically subsume the function comments for
> kvm_mmu_write_protect_pt_masked() and kvm_mmu_clear_dirty_pt_masked(), as
> there is no value in regurgitating similar information at a higher level,
> and capturing the differences between write-protection and PML-based dirty
> logging is best done in a common location.
>
> No functional change intended.
>
> Cc: David Matlack <dmatlack@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>
> v2: Put the comments in the function body. [David]
>
> v1: https://lore.kernel.org/all/20240611215805.340664-1-seanjc@google.com
>
> arch/x86/kvm/mmu/mmu.c | 48 +++++++++++++-----------------------------
> 1 file changed, 15 insertions(+), 33 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 901be9e420a4..45e7e9bd5e76 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1307,15 +1307,6 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
> return flush;
> }
>
> -/**
> - * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
> - * @kvm: kvm instance
> - * @slot: slot to protect
> - * @gfn_offset: start of the BITS_PER_LONG pages we care about
> - * @mask: indicates which pages we should protect
> - *
> - * Used when we do not need to care about huge page mappings.
> - */
> static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn_offset, unsigned long mask)
> @@ -1339,16 +1330,6 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
> }
> }
>
> -/**
> - * kvm_mmu_clear_dirty_pt_masked - clear MMU D-bit for PT level pages, or write
> - * protect the page if the D-bit isn't supported.
> - * @kvm: kvm instance
> - * @slot: slot to clear D-bit
> - * @gfn_offset: start of the BITS_PER_LONG pages we care about
> - * @mask: indicates which pages we should clear D-bit
> - *
> - * Used for PML to re-log the dirty GPAs after userspace querying dirty_bitmap.
> - */
> static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn_offset, unsigned long mask)
> @@ -1372,24 +1353,16 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
> }
> }
>
> -/**
> - * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
> - * PT level pages.
> - *
> - * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
> - * enable dirty logging for them.
> - *
> - * We need to care about huge page mappings: e.g. during dirty logging we may
> - * have such mappings.
> - */
> void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn_offset, unsigned long mask)
> {
> /*
> - * Huge pages are NOT write protected when we start dirty logging in
> - * initially-all-set mode; must write protect them here so that they
> - * are split to 4K on the first write.
> + * If the slot was assumed to be "initially all dirty", write-protect
> + * huge pages to ensure they are split to 4KiB on the first write (KVM
> + * dirty logs at 4KiB granularity). If eager page splitting is enabled,
> + * immediately try to split huge pages, e.g. so that vCPUs don't get
> + * saddled with the cost of splitting.
> *
> * The gfn_offset is guaranteed to be aligned to 64, but the base_gfn
> * of memslot has no such restriction, so the range can cross two large
> @@ -1411,7 +1384,16 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> PG_LEVEL_2M);
> }
>
> - /* Now handle 4K PTEs. */
> + /*
> + * (Re)Enable dirty logging for all 4KiB SPTEs that map the GFNs in
> + * mask. If PML is enabled and the GFN doesn't need to be
> + * write-protected for other reasons, e.g. shadow paging, clear the
> + * Dirty bit. Otherwise clear the Writable bit.
> + *
> + * Note that kvm_mmu_clear_dirty_pt_masked() is called whenever PML is
> + * enabled but it chooses between clearing the Dirty bit and Writable
> + * bit based on the context.
> + */
> if (kvm_x86_ops.cpu_dirty_log_size)
> kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
> else
Thanks for fixing this comment. Indeed, it previously took a bit of digging
through the code to see that this condition means "PML is enabled"; I ran
into the same thing.
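
For anyone else hitting the same question, here is a minimal sketch of what
the check boils down to. The helper below is hypothetical (illustration only,
not part of this patch or of KVM); the real code simply tests the
kvm_x86_ops.cpu_dirty_log_size field directly:

/*
 * Hypothetical helper, for illustration only: cpu_dirty_log_size is
 * non-zero only when the CPU's PML (Page Modification Logging) is in
 * use for dirty logging, so testing it is effectively asking
 * "are we dirty logging via PML?".
 */
static inline bool kvm_dirty_log_uses_pml(void)
{
	return kvm_x86_ops.cpu_dirty_log_size != 0;
}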
Reviewed-by: Pankaj Gupta <pankaj.gupta@....com>
>
> base-commit: 332d2c1d713e232e163386c35a3ba0c1b90df83f