Date:   Mon, 6 Dec 2021 08:39:53 -0800
From:   Dan Williams <dan.j.williams@...el.com>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Kuppuswamy Sathyanarayanan 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        X86 ML <x86@...nel.org>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        "H . Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Kuppuswamy Sathyanarayanan <knsathya@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux ACPI <linux-acpi@...r.kernel.org>
Subject: Re: [PATCH v2] x86: Skip WBINVD instruction for VM guest

On Mon, Dec 6, 2021 at 7:35 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 12/3/21 4:54 PM, Kirill A. Shutemov wrote:
> > On Fri, Dec 03, 2021 at 04:20:34PM -0800, Dave Hansen wrote:
> >>> TDX doesn't support these S- and C-states. TDX only supports S0 and S5.
> >>
> >> This makes me a bit nervous.  Is this "the first TDX implementation
> >> supports..." or "the TDX architecture *prohibits* supporting S1 (or
> >> whatever)"?
> >
> > The TDX Virtual Firmware Design Guide only states "ACPI S3 (not supported
> > by TDX guests)".
> >
> > The kernel reports "ACPI: PM: (supports S0 S5)" in dmesg.
>
> Those describe the current firmware implementation, not a guarantee
> provided by the TDX architecture forever.
>
> > But I don't see how any state beyond S0 and S5 makes sense in a TDX context.
> > Do you?
>
> Do existing (non-TDX) VMs use anything other than S0 and S5?  If so, I'd
> say yes.
>
> >> I really think we need some kind of architecture guarantee.  Without
> >> that, we risk breaking things if someone at our employer simply changes
> >> their mind.
> >
> > Guarantees are hard.
> >
> > If somebody changes their mind we will get an unexpected #VE and crash.
> > I think that is an acceptable way to handle an unexpected change in a
> > confidential computing environment.
>
> Architectural guarantees are quite easy, actually.  They're just a
> contract that two parties agree to.  In this case, the contract would be
> that TDX firmware *PROMISES* not to enumerate support for additional
> sleep states over what the implementation does today.  If future
> firmware breaks that promise (and the kernel crashes) we get to come
> after them with torches and pitchforks to fix the firmware.
>
> The contract lets us do things in the OS like:
>
>         WARN_ON(sleep_states[ACPI_STATE_S3]);
>
> We also don't need *formal* documentation of such things.  We really
> just need to have a chat.
>
> It would be perfectly sufficient if we go bug Intel's TDX architecture
> folks and say, "Hey, Linux is going to crash if you ever implement any
> actual sleep states.  The current implementation is fine here, but is it
> OK if future implementations are restricted from doing this?"
>
> But, the trick is that we need a contract.  A contract requires a
> "meeting of the minds" first.

The WBINVD requirement in sleep states is about getting cache contents
out to the power-preserved domain before the CPU turns off. The bare
metal host handles that requirement. The conversation that needs to be
had is with the ACPI specification committee to clarify that virtual
machines have no responsibility to flush caches. We can do that as a
Code First proposal to the ACPI Specification Working Group.
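
On the guest side that can be as simple as bailing out of the cache
flush path when running under a hypervisor. A minimal sketch (the
function name below is made up for illustration; the actual patch
modifies the existing ACPI sleep / WBINVD call sites):

        /* Illustrative helper, not from the patch. */
        static void acpi_sleep_cache_flush(void)
        {
                /*
                 * A guest has no power-preserved cache domain of its
                 * own; the bare metal host performs the WBINVD on its
                 * behalf.
                 */
                if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                        return;

                wbinvd();
        }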
