Message-ID: <ZIwUO/8W7dRYt2s8@yzhao56-desk.sh.intel.com>
Date: Fri, 16 Jun 2023 15:50:19 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Yuan Yao <yuan.yao@...ux.intel.com>
CC: <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<pbonzini@...hat.com>, <seanjc@...gle.com>, <chao.gao@...el.com>,
<kai.huang@...el.com>, <robert.hoo.linux@...il.com>
Subject: Re: [PATCH v3 09/11] KVM: x86/mmu: serialize vCPUs to zap gfn when
guest MTRRs are honored
On Fri, Jun 16, 2023 at 04:09:17PM +0800, Yuan Yao wrote:
> On Fri, Jun 16, 2023 at 03:37:29PM +0800, Yan Zhao wrote:
> > On Fri, Jun 16, 2023 at 03:45:50PM +0800, Yuan Yao wrote:
> > > > +/*
> > > > + * Add @range into kvm->arch.mtrr_zap_list and sort the list in
> > > > + * "length" ascending + "start" descending order, so that
> > > > + * ranges consuming more zap cycles can be dequeued later and their
> > > > + * chances of being found duplicated are increased.
> > > > + */
> > > > +static void kvm_add_mtrr_zap_list(struct kvm *kvm, struct mtrr_zap_range *range)
> > > > +{
> > > > + struct list_head *head = &kvm->arch.mtrr_zap_list;
> > > > + u64 len = range->end - range->start;
> > > > + struct mtrr_zap_range *cur, *n;
> > > > + bool added = false;
> > > > +
> > > > + spin_lock(&kvm->arch.mtrr_zap_list_lock);
> > > > +
> > > > + if (list_empty(head)) {
> > > > + list_add(&range->node, head);
> > > > + spin_unlock(&kvm->arch.mtrr_zap_list_lock);
> > > > + return;
> > > > + }
> > > > +
> > > > + list_for_each_entry_safe(cur, n, head, node) {
> > > > + u64 cur_len = cur->end - cur->start;
> > > > +
> > > > + if (len < cur_len)
> > > > + break;
> > > > +
> > > > + if (len > cur_len)
> > > > + continue;
> > > > +
> > > > + if (range->start > cur->start)
> > > > + break;
> > > > +
> > > > + if (range->start < cur->start)
> > > > + continue;
> > > > +
> > > > + /* equal len & start, no need to add */
> > > > + added = true;
> > >
> > > Is it possible/worthwhile to ignore a range that is already covered
> > > by a queued range?
> >
> > I may not have understood you correctly, but
> > "added" here means a queued range with exactly the same start + len was
> > found, so we free the new range and skip adding it here.
>
> I mean drop adding the three Bs below if A is already in the queue:
>
> |------A--------|
> |----B----|
>
> |------A--------|
> |----B----|
>
> |------A--------|
> |----B----|
>
Oh, I implemented it that way in my first version.
But it complicates the logic and increases the time the spinlock is held.
And since the zaps caused by MTRR updates and CR0.CD toggles usually queue
ranges that are duplicated across different vCPUs but non-overlapping within
a single vCPU, I eventually switched to this simpler implementation.
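
For reference, a containment check along the lines suggested above could look
roughly like the sketch below. It is purely illustrative: it reuses the names
from the quoted patch, and kvm_mtrr_zap_range_covered() is a hypothetical
helper, not code from this series.

static bool kvm_mtrr_zap_range_covered(struct kvm *kvm,
				       struct mtrr_zap_range *range)
{
	struct mtrr_zap_range *cur;

	lockdep_assert_held(&kvm->arch.mtrr_zap_list_lock);

	list_for_each_entry(cur, &kvm->arch.mtrr_zap_list, node) {
		/* @range lies entirely within an already-queued range */
		if (cur->start <= range->start && range->end <= cur->end)
			return true;
	}

	return false;
}

Because the list is sorted by length ascending, a range covered by a longer
queued range can only be found by walking past all the longer entries, so this
walk cannot bail out early the way the exact-match check in
kvm_add_mtrr_zap_list() can; that is the extra time under the spinlock
mentioned above.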