Message-ID: <Z1DSgmzo3sX0gWY3@google.com>
Date: Wed, 4 Dec 2024 14:06:58 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Kevin Loughlin <kevinloughlin@...gle.com>
Cc: Zheyun Shen <szy0127@...u.edu.cn>, thomas.lendacky@....com, pbonzini@...hat.com, 
	tglx@...utronix.de, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/2] KVM: SVM: Flush cache only on CPUs running SEV guest

On Wed, Dec 04, 2024, Kevin Loughlin wrote:
> On Tue, Dec 3, 2024 at 4:27 PM Sean Christopherson <seanjc@...gle.com> wrote:
> > > @@ -2152,7 +2191,7 @@ void sev_vm_destroy(struct kvm *kvm)
> > >        * releasing the pages back to the system for use. CLFLUSH will
> > >        * not do this, so issue a WBINVD.
> > >        */
> > > -     wbinvd_on_all_cpus();
> > > +     sev_do_wbinvd(kvm);
> >
> > I am 99% certain this wbinvd_on_all_cpus() can simply be dropped.  sev_vm_destroy()
> > is called after KVM's mmu_notifier has been unregistered, which means it's called
> > after kvm_mmu_notifier_release() => kvm_arch_guest_memory_reclaimed().
> 
> I think we need a bit of rework before dropping it (which I propose we
> do in a separate series), but let me know if there's a mistake in my
> reasoning here...
> 
> Right now, sev_guest_memory_reclaimed() issues writebacks for SEV and
> SEV-ES guests but does *not* issue writebacks for SEV-SNP guests.
> Thus, I believe it's possible for an SEV-SNP guest to reach
> sev_vm_destroy() with dirty encrypted lines in processor caches.
> Because SME_COHERENT doesn't guarantee coherence across CPU-DMA
> interactions (d45829b351ee ("KVM: SVM: Flush when freeing encrypted
> pages even on SME_COHERENT CPUs")), it seems possible that the memory
> gets re-allocated for DMA, written with (unencrypted) DMA data, and
> then corrupted when the dirty encrypted lines get written back over
> it, right?
> 
> And does the same reasoning potentially explain why we can't yet drop
> the writeback in sev_flush_encrypted_page() without a bit of rework?

Argh, this last one probably does apply to SNP.  KVM requires SNP VMs to be backed
with guest_memfd, and flushing for that memory is handled by sev_gmem_invalidate().
But the VMSA is kernel-allocated and so needs to be flushed manually.  On the plus
side, the VMSA flush shouldn't use WB{NO}INVD unless things go sideways, so trying
to optimize that path isn't worth doing.
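
For reference, the flush path in question looks roughly like the sketch
below, modeled on sev_flush_encrypted_page() in arch/x86/kvm/svm/sev.c
(details approximate, not the exact upstream code): CLFLUSHOPT on
SME_COHERENT parts, VM_PAGE_FLUSH otherwise, and WBINVD on all CPUs
only as the fall-back when the MSR write faults.

/*
 * Approximate sketch of sev_flush_encrypted_page(); not the exact
 * upstream code.  WBINVD is only the fall-back, i.e. the "things go
 * sideways" case.
 */
static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
{
	unsigned long addr = (unsigned long)va;

	/*
	 * SME_COHERENT makes CPU caches coherent across encrypted
	 * mappings, so flushing the range with CLFLUSHOPT suffices; the
	 * flush is still needed for coherence vs. DMA (d45829b351ee).
	 */
	if (boot_cpu_has(X86_FEATURE_SME_COHERENT)) {
		clflush_cache_range(va, PAGE_SIZE);
		return;
	}

	/*
	 * VM_PAGE_FLUSH takes a host virtual address and the guest's
	 * ASID.  Fall back to WBINVD on all CPUs if the MSR write
	 * faults, so as not to leave stale encrypted data in the cache.
	 */
	if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH,
				     addr | sev_get_asid(vcpu->kvm))))
		wbinvd_on_all_cpus();
}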

> It's true that the SNP firmware requires WBINVD before SNP_DF_FLUSH
> [1], but I think we currently only do that when an ASID is recycled,
> *not* when an ASID is deactivated.
> 
> [1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/specifications/56860.pdf
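
For context, the recycle-time flush mentioned above looks roughly like
the following sketch of sev_flush_asids() (approximate, not the exact
upstream code): WBINVD is issued on all CPUs immediately before the
DF_FLUSH firmware command, under a lock that excludes a concurrent
DEACTIVATE, since DEACTIVATE clears the firmware's WBINVD indicator
and would cause DF_FLUSH to fail.

/*
 * Approximate sketch of the recycle-time flush; not the exact
 * upstream code (e.g. the reclaimable-ASID check is omitted).
 */
static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
{
	int ret, error = 0;

	/*
	 * DEACTIVATE clears the firmware's WBINVD indicator and would
	 * cause DF_FLUSH to fail, so exclude it while flushing.
	 */
	down_write(&sev_deactivate_lock);

	/* Firmware requires WBINVD on all CPUs before DF_FLUSH. */
	wbinvd_on_all_cpus();
	ret = sev_guest_df_flush(&error);

	up_write(&sev_deactivate_lock);

	if (ret)
		pr_err("SEV: DF_FLUSH failed, ret=%d, error=%#x\n", ret, error);

	return ret;
}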
