Date:   Thu, 2 Dec 2021 14:38:05 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     Kuppuswamy Sathyanarayanan 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        "H . Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Kuppuswamy Sathyanarayanan <knsathya@...nel.org>,
        linux-kernel@...r.kernel.org, linux-acpi@...r.kernel.org
Subject: Re: [PATCH v2] x86: Skip WBINVD instruction for VM guest

On 12/2/21 2:21 PM, Kirill A. Shutemov wrote:
>   - NVDIMMs are not supported inside TDX. If that changes, we would need
>     to deal with cache flushing for this case. Hopefully, we would be able
>     to avoid WBINVD.

Maybe we can use this as an example since we have our friendly NVDIMM
developers on cc already.

Let's say that tomorrow Intel decides that NVDIMMs are OK to use in TDX.
It might not be a good idea, but Intel could arbitrarily start
supporting them immediately.  Further, someone could take today's kernel
and stick it on some future, fancy platform which does support TDX and
NVDIMMs.  In other words, there are multiple reasons we can't just say
"TDX doesn't support NVDIMMs" and forget about it.

If either of those happened, we'd have an NVDIMM driver which uses
WBINVD, expects cache flushing, and subsequently loses data.  I think we
can all agree that's a bad idea.
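For concreteness, the pattern at risk looks roughly like this.  This is
a hand-written sketch, not the actual nvdimm code; the function name is
made up, but wbinvd_on_all_cpus() is the real helper such a path would
use:

	/*
	 * Hypothetical NVDIMM path: flush every CPU cache so no dirty
	 * lines are lost before the persistent media is overwritten
	 * or unlocked.
	 */
	static void nvdimm_flush_all_caches(void)
	{
		/* Executes WBINVD on each online CPU. */
		wbinvd_on_all_cpus();
	}

If a #VE handler silently skips the WBINVD underneath this, the driver
still believes the flush happened, and dirty cache lines never reach
the persistent media.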

So, we've got two different cases that land in the #VE handler:

	1. Silly ACPI code that doesn't need WBINVD behavior

	2. Less silly NVDIMM code that badly needs WBINVD behavior

... but we have a #VE handler that can't tell the difference.
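The shape of the problem, in pseudo-handler form (a sketch, not the
code from the patch; ve_skip_instruction() is an illustrative name,
though EXIT_REASON_WBINVD and struct ve_info are real):

	static int ve_handle_exit(struct ve_info *ve)
	{
		switch (ve->exit_reason) {
		case EXIT_REASON_WBINVD:
			/*
			 * Cases 1 and 2 above both arrive here with an
			 * identical exit reason.  Nothing in the #VE
			 * information says whether the caller depended
			 * on the flush, so the handler cannot safely
			 * no-op it for ACPI while honoring it for an
			 * NVDIMM driver.
			 */
			return ve_skip_instruction(ve);	/* papers over it */
		default:
			return -EIO;
		}
	}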

To me, that says we need to do _something_ different than just papering
over the WBINVD in the #VE handler.

Does anyone have a different take on it?
