Message-ID: <Z4VD3AaQskK7IkYU@google.com>
Date: Mon, 13 Jan 2025 08:48:28 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Peter Xu <peterx@...hat.com>, Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH 4/5] KVM: Check for empty mask of harvested dirty ring
entries in caller
On Mon, Jan 13, 2025, Yan Zhao wrote:
> On Fri, Jan 10, 2025 at 05:04:08PM -0800, Sean Christopherson wrote:
> > @@ -163,14 +157,31 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
> > continue;
> > }
> > }
> > - kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
> > +
> > + /*
> > + * Reset the slot for all the harvested entries that have been
> > + * gathered, but not yet fully processed.
> > + */
> I really like the logs as it took me quite a while figuring out how this part of
> the code works :)
>
> Does "processed" mean the entries have been reset, and "gathered" means they've
> been read from the ring?
Yeah.
> I'm not sure, but do you like this version? e.g.
> "Combined reset of the harvested entries that can be identified by curr_slot
> plus cur_offset+mask" ?
I have no objection to documenting the mechanics *and* the high level intent,
but I definitely want to document the "what", not just the "how".
> > + if (mask)
> > + kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
> > +
> > + /*
> > + * The current slot was reset or this is the first harvested
> > + * entry; (re)initialize the metadata.
> > + */
> What about
> "Save the current slot and cur_offset (with mask initialized to 1) to check if
> any future entries can be found for a combined reset." ?
Hmm, what if I add a comment at the top to document the overall behavior and the
variables,
/*
* To minimize mmu_lock contention, batch resets for harvested entries
* whose gfns are in the same slot, and are within N frame numbers of
* each other, where N is the number of bits in an unsigned long. For
* simplicity, process the current set of entries when the next entry
* can't be included in the batch.
*
* Track the current batch slot, the gfn offset into the slot for the
* batch, and the bitmask of gfns that need to be reset (relative to
* offset). Note, the offset may be adjusted backwards, e.g. so that
* a sequence of gfns X, X-1, ... X-N can be batched.
*/
u32 cur_slot, next_slot;
u64 cur_offset, next_offset;
unsigned long mask = 0;
struct kvm_dirty_gfn *entry;
and then keep this as:
/*
 * The current slot was reset or this is the first harvested
 * entry; (re)initialize the batching metadata.
 */
>
> > cur_slot = next_slot;
> > cur_offset = next_offset;
> > mask = 1;
> > first_round = false;
> > }
> >
> > - kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
> > + /*
> > + * Perform a final reset if there are harvested entries that haven't
> > + * been processed. The loop only performs a reset when an entry can't
> > + * be coalesced, i.e. always leaves at least one entry pending.
> The loop only performs a reset when an entry can be coalesced?
No, if an entry can be coalesced then the loop doesn't perform a reset. Does
this read better?
/*
 * Perform a final reset if there are harvested entries that haven't
 * been processed, which is guaranteed if at least one harvested entry
 * was found.  The loop only performs a reset when the "next" entry
 * can't be batched with the "current" entry(s), and that reset
 * processes the _current_ entry(s), i.e. the last harvested entry,
 * a.k.a. next, will always be left pending.
 */
> > + */
> > + if (mask)
> > + kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
> >
> > /*
> > * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
> > --
> > 2.47.1.613.gc27f4b7a9f-goog
> >
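For illustration, the batching scheme described above can be sketched in standalone C. This is a hypothetical mock-up, not the actual KVM code: `struct dirty_entry`, `reset_dirty_gfns()`, and `reset_batched()` are invented stand-ins for the real ring entries, `kvm_reset_dirty_gfn()`, and `kvm_dirty_ring_reset()`, and the backward offset adjustment (batching X, X-1, ...) is omitted for brevity:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical harvested entry: slot ID and gfn offset into the slot. */
struct dirty_entry {
	uint32_t slot;
	uint64_t offset;
};

static int nr_resets;	/* counts combined resets actually issued */

/* Stand-in for kvm_reset_dirty_gfn(): resets all gfns set in @mask,
 * relative to @offset, in @slot. */
static void reset_dirty_gfns(uint32_t slot, uint64_t offset,
			     unsigned long mask)
{
	(void)slot; (void)offset; (void)mask;
	nr_resets++;
}

/*
 * Batch resets for entries whose gfns are in the same slot and within
 * BITS_PER_LONG frame numbers of the batch's base offset.  Process the
 * current batch only when the next entry can't be included in it, and
 * do a final reset for the batch left pending when the loop exits.
 */
static void reset_batched(const struct dirty_entry *e, int nr)
{
	uint32_t cur_slot = 0;
	uint64_t cur_offset = 0;
	unsigned long mask = 0;
	int i;

	for (i = 0; i < nr; i++) {
		if (mask && e[i].slot == cur_slot &&
		    e[i].offset >= cur_offset &&
		    e[i].offset < cur_offset + BITS_PER_LONG) {
			/* Coalesce into the current batch. */
			mask |= 1ul << (e[i].offset - cur_offset);
			continue;
		}

		/* Reset the gathered, not yet processed, batch. */
		if (mask)
			reset_dirty_gfns(cur_slot, cur_offset, mask);

		/* First entry, or the entry couldn't be batched:
		 * (re)initialize the batching metadata. */
		cur_slot = e[i].slot;
		cur_offset = e[i].offset;
		mask = 1;
	}

	/* The loop always leaves the last batch pending. */
	if (mask)
		reset_dirty_gfns(cur_slot, cur_offset, mask);
}
```

With entries (0,5), (0,7), (0,6), (0,100), (1,100), the first three coalesce into one reset, the slot-0/offset-100 entry and the slot-1 entry each force a new batch, so only three resets are issued for five entries. In the real code this is what keeps mmu_lock acquisitions down to roughly one per run of nearby gfns instead of one per dirty gfn.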