Message-ID: <CAGdbjmKB9=pwv0jc3xa8sY+Toc4q=_U=DWKm7+vP3-CsCvbQ3A@mail.gmail.com>
Date: Fri, 17 Jan 2025 14:20:44 -0800
From: Kevin Loughlin <kevinloughlin@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, linux-kernel@...r.kernel.org, 
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, 
	dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com, 
	pbonzini@...hat.com, kai.huang@...el.com, ubizjak@...il.com, 
	dave.jiang@...el.com, jgross@...e.com, kvm@...r.kernel.org, 
	thomas.lendacky@....com, pgonda@...gle.com, sidtelang@...gle.com, 
	mizhang@...gle.com, rientjes@...gle.com, szy0127@...u.edu.cn
Subject: Re: [PATCH v2 2/2] KVM: SEV: Prefer WBNOINVD over WBINVD for cache
 maintenance efficiency

On Tue, Jan 14, 2025 at 8:13 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Tue, Jan 14, 2025, Kirill A. Shutemov wrote:
> > On Mon, Jan 13, 2025 at 10:47:57AM -0800, Kevin Loughlin wrote:
> > > On Fri, Jan 10, 2025 at 12:23 AM Kirill A. Shutemov
> > > <kirill.shutemov@...ux.intel.com> wrote:
> > > >
> > > > On Thu, Jan 09, 2025 at 10:55:33PM +0000, Kevin Loughlin wrote:
> > > > > @@ -710,6 +711,14 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
> > > > >       }
> > > > >  }
> > > > >
> > > > > +static void sev_wb_on_all_cpus(void)
>
> In anticipation of landing the targeted flushing optimizations[*] in the near-ish
> future, I would prefer to avoid the "on_all_cpus" aspect of the name.
>
> Maybe sev_writeback_caches(), to kinda sorta pair with sev_clflush_pages()?
> Definitely open to other names.
>
> [*] https://lore.kernel.org/all/20240411140445.1038319-1-szy0127@sjtu.edu.cn

Ack, will do.
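For concreteness, the rename would look something like this (rough sketch, exact form TBD for v3):

static void sev_writeback_caches(void)
{
        if (boot_cpu_has(X86_FEATURE_WBNOINVD))
                wbnoinvd_on_all_cpus();
        else
                wbinvd_on_all_cpus();
}

i.e. the same body as sev_wb_on_all_cpus() in the patch, just without "on_all_cpus" baked into the SEV-facing name, so the targeted-flushing series can change the implementation later without touching callers.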

> > > > > +{
> > > > > +     if (boot_cpu_has(X86_FEATURE_WBNOINVD))
> > > > > +             wbnoinvd_on_all_cpus();
> > > > > +     else
> > > > > +             wbinvd_on_all_cpus();
> > > >
> > > > I think the X86_FEATURE_WBNOINVD check should be inside wbnoinvd().
> > > > wbnoinvd() should fallback to WBINVD if the instruction is not supported
> > > > rather than trigger #UD.
> > >
> > > I debated this as well and am open to doing it that way. One argument
> > > against silently falling back to WBINVD within wbnoinvd() (in general,
> > > not referring to this specific case) is that frequent WBINVD can cause
> > > soft lockups, whereas WBNOINVD is much less likely to do so. As such,
> > > there are potentially use cases where falling back to WBINVD would be
> > > undesirable (and would potentially be non-obvious behavior to the
> > > programmer calling wbnoinvd()), hence why I left the decision outside
> > > wbnoinvd().
>
> Practically speaking, I highly doubt there will ever be a scenario where not
> falling back to WBINVD is a viable option.  The alternatives I see are:
>
>   1. Crash
>   2. Data Corruption
>   3. CLFLUSH{OPT}
>
> (1) and (2) are laughably bad, and (3) would add significant complexity in most
> situations (to enumerate all addresses), and would be outright impossible in some.
>
> And if someone opts for WBNOINVD, they darn well better have done their homework,
> because there should be precious few reasons to need to flush all caches, and as
> evidenced by lack of usage in the kernel, even fewer reasons to write back dirty
> data while preserving entries.  I don't think it's at all unreasonable to put the
> onus on future developers to look at the implementation of wbnoinvd() and
> understand the implications.

Yeah, you and Kirill have convinced me that falling back to WBINVD is
the way to go. I'll do that in v3 shortly here; thanks!
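
For reference, the shape I have in mind for v3 (rough sketch, details may change): put the fallback inside wbnoinvd() itself by emitting WBNOINVD as its REP-prefixed WBINVD encoding, which CPUs lacking X86_FEATURE_WBNOINVD execute as plain WBINVD (the prefix is ignored), so the helper writes back dirty lines either way and never #UDs:

/*
 * WBNOINVD is encoded as F3 0F 09, i.e. WBINVD with a REP prefix.
 * CPUs that don't enumerate X86_FEATURE_WBNOINVD ignore the prefix
 * and execute plain WBINVD, so this falls back gracefully instead
 * of faulting.
 */
static __always_inline void wbnoinvd(void)
{
        asm volatile("rep; wbinvd" : : : "memory");
}

The SEV helper (sev_writeback_caches() per the rename above) can then call wbnoinvd_on_all_cpus() unconditionally and drop the boot_cpu_has() check.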
