Message-ID: <4fde4c953a4204e70d89f2c3dfd24eccdac0540f.camel@gmail.com>
Date: Sun, 18 May 2025 11:52:07 +0200
From: Francesco Lavra <francescolavra.fl@...il.com>
To: seanjc@...gle.com
Cc: airlied@...il.com, bp@...en8.de, dave.hansen@...ux.intel.com,
dri-devel@...ts.freedesktop.org, kai.huang@...el.com,
kevinloughlin@...gle.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, maarten.lankhorst@...ux.intel.com,
mingo@...hat.com, mizhang@...gle.com, mripard@...nel.org,
pbonzini@...hat.com, simona@...ll.ch, szy0127@...u.edu.cn,
tglx@...utronix.de, thomas.lendacky@....com, tzimmermann@...e.de,
x86@...nel.org
Subject: Re: [PATCH v2 5/8] KVM: SEV: Prefer WBNOINVD over WBINVD for cache
maintenance efficiency
On 2025-05-16 at 21:28, Sean Christopherson wrote:
> @@ -3901,7 +3908,7 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
>  	 * From this point forward, the VMSA will always be a guest-mapped page
>  	 * rather than the initial one allocated by KVM in svm->sev_es.vmsa. In
>  	 * theory, svm->sev_es.vmsa could be free'd and cleaned up here, but
> -	 * that involves cleanups like wbinvd_on_all_cpus() which would ideally
> +	 * that involves cleanups like flushing caches, which would ideally be
>  	 * be handled during teardown rather than guest boot. Deferring that
Duplicate "be": the added line now ends with "be" while the following, unchanged line also starts with "be".
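
(For context, the point of the series is to prefer WBNOINVD over WBINVD when the CPU supports it, since WBNOINVD writes back dirty cache lines without forcing their invalidation. A minimal sketch of that preference, purely illustrative and not the actual helper added by the patch, with the function name made up for this example:

    #include <asm/cpufeature.h>   /* boot_cpu_has() */
    #include <asm/cpufeatures.h>  /* X86_FEATURE_WBNOINVD */

    /*
     * Illustrative sketch only: use WBNOINVD when the CPU advertises it,
     * otherwise fall back to plain WBINVD.
     */
    static inline void flush_caches_prefer_wbnoinvd(void)
    {
    	if (boot_cpu_has(X86_FEATURE_WBNOINVD))
    		/* WBNOINVD encoding: F3 0F 09 (REP-prefixed WBINVD) */
    		asm volatile(".byte 0xf3, 0x0f, 0x09" ::: "memory");
    	else
    		asm volatile("wbinvd" ::: "memory");
    }

Either way, the comment nit above stands regardless of which flush is used.)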