Message-ID: <ZPWXu4RBjJgiYYjo@yzhao56-desk.sh.intel.com>
Date: Mon, 4 Sep 2023 16:39:23 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<pbonzini@...hat.com>, <chao.gao@...el.com>, <kai.huang@...el.com>,
<robert.hoo.linux@...il.com>, <yuan.yao@...ux.intel.com>
Subject: Re: [PATCH v4 11/12] KVM: x86/mmu: split a single gfn zap range when
guest MTRRs are honored
On Fri, Aug 25, 2023 at 04:15:41PM -0700, Sean Christopherson wrote:
> On Fri, Jul 14, 2023, Yan Zhao wrote:
> > Split a single gfn zap range (specifically range [0, ~0UL)) into smaller
> > ranges according to the current memslot layout when guest MTRRs are honored.
> >
> > Though vCPUs have been serialized to perform kvm_zap_gfn_range() for MTRR
> > updates and CR0.CD toggles, the rescheduling cost caused by contention is
> > still huge when there are concurrent page faults holding mmu_lock for read.
>
> Unless the pre-check doesn't work for some reason, I definitely want to avoid
> this patch. This is a lot of complexity that, IIUC, is just working around a
> problem elsewhere in KVM.
>
I think so too.
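
For reference, a minimal sketch of the kind of split the patch describes,
assuming the upstream kvm_zap_gfn_range(), __kvm_memslots() and
kvm_for_each_memslot() helpers and KVM_ADDRESS_SPACE_NUM; the wrapper name
below is hypothetical and this is not the actual patch:

	/*
	 * Instead of zapping the single range [0, ~0UL), walk the memslots
	 * and zap only the GFN ranges that are actually backed by a slot,
	 * skipping the unbacked gaps between slots.
	 */
	static void kvm_zap_all_memslot_gfn_ranges(struct kvm *kvm)
	{
		const struct kvm_memory_slot *slot;
		struct kvm_memslots *slots;
		int i, bkt;

		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
			slots = __kvm_memslots(kvm, i);
			kvm_for_each_memslot(slot, bkt, slots)
				kvm_zap_gfn_range(kvm, slot->base_gfn,
						  slot->base_gfn + slot->npages);
		}
	}

Skipping the unbacked gaps is what would shorten the time mmu_lock is held
for write, and thereby the contention with concurrent page faults taking it
for read that the commit message calls out.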