Message-ID: <ZOk2HWEubJIRo1HN@google.com>
Date: Fri, 25 Aug 2023 16:15:41 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, chao.gao@...el.com, kai.huang@...el.com,
robert.hoo.linux@...il.com, yuan.yao@...ux.intel.com
Subject: Re: [PATCH v4 11/12] KVM: x86/mmu: split a single gfn zap range when
guest MTRRs are honored
On Fri, Jul 14, 2023, Yan Zhao wrote:
> Split a single gfn zap range (specifically, range [0, ~0UL)) into smaller
> ranges according to the current memslot layout when guest MTRRs are honored.
>
> Though vCPUs have been serialized to perform kvm_zap_gfn_range() for MTRR
> updates and CR0.CD toggles, the rescheduling cost caused by contention is
> still huge when there are concurrent page faults holding mmu_lock for read.

Unless the pre-check doesn't work for some reason, I definitely want to avoid
this patch. This is a lot of complexity that, IIUC, is just working around a
problem elsewhere in KVM.

> Splitting a single huge zap range according to the actual memslot layout can
> reduce unnecessary traversal and yielding cost in the TDP MMU.
> It can also increase the chances for larger ranges to find existing ranges
> to zap in the zap list.
>
> Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> ---
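
For context, a minimal sketch of the per-memslot splitting idea being described
(the wrapper below is hypothetical and is not the code from the actual patch;
kvm_zap_gfn_range(), __kvm_memslots() and kvm_for_each_memslot() are existing
KVM helpers):

static void zap_gfn_ranges_per_memslot(struct kvm *kvm)
{
	struct kvm_memory_slot *slot;
	struct kvm_memslots *slots;
	int i, bkt;

	/*
	 * Instead of issuing a single [0, ~0UL) zap, walk the memslots in
	 * every address space and zap only the gfn ranges that are actually
	 * backed by a memslot, so each zap covers a smaller range and yields
	 * mmu_lock for less time under contention with page faults taking
	 * the lock for read.
	 */
	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
		slots = __kvm_memslots(kvm, i);
		kvm_for_each_memslot(slot, bkt, slots)
			kvm_zap_gfn_range(kvm, slot->base_gfn,
					  slot->base_gfn + slot->npages);
	}
}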