Message-ID: <20210827183923.xvgeml3etyb2glv6@amd.com>
Date:   Fri, 27 Aug 2021 13:39:23 -0500
From:   Michael Roth <michael.roth@....com>
To:     Brijesh Singh <brijesh.singh@....com>
Cc:     Borislav Petkov <bp@...en8.de>, x86@...nel.org,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        linux-efi@...r.kernel.org, platform-driver-x86@...r.kernel.org,
        linux-coco@...ts.linux.dev, linux-mm@...ck.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
        Tom Lendacky <thomas.lendacky@....com>,
        "H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        David Rientjes <rientjes@...gle.com>,
        Dov Murik <dovmurik@...ux.ibm.com>,
        Tobin Feldman-Fitzthum <tobin@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Kirill A . Shutemov" <kirill@...temov.name>,
        Andi Kleen <ak@...ux.intel.com>, tony.luck@...el.com,
        marcorr@...gle.com, sathyanarayanan.kuppuswamy@...ux.intel.com
Subject: Re: [PATCH Part1 v5 32/38] x86/sev: enable SEV-SNP-validated CPUID
 in #VC handlers

On Fri, Aug 27, 2021 at 10:47:42AM -0500, Brijesh Singh wrote:
> 
> On 8/27/21 10:18 AM, Borislav Petkov wrote:
> > On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> >> From: Michael Roth <michael.roth@....com>
> >>
> >> This adds support for utilizing the SEV-SNP-validated CPUID table in
> > s/This adds support for utilizing/Utilize/
> >
> > Yap, it can really be that simple. :)
> >
> >> the various #VC handler routines used throughout boot/run-time. Mostly
> >> this is handled by re-using the CPUID lookup code introduced earlier
> >> for the boot/compressed kernel, but at various stages of boot some work
> >> needs to be done to ensure the CPUID table is set up and remains
> >> accessible throughout. The following init routines are introduced to
> >> handle this:
> > Do not talk about what your patch does - that should hopefully be
> > visible in the diff itself. Rather, talk about *why* you're doing what
> > you're doing.
> >
> >> sev_snp_cpuid_init():
> > This one is not really introduced - it is already there.
> >
> > <snip all the complex rest>
> >
> > So this patch is making my head spin. It seems we're doing a lot of
> > dancing just to have our CPUID page present at all times. Which begs
> > the question: do we need it during the whole lifetime of the guest?
> 
> Mike can correct me, but we need it for the entire lifetime of the
> guest. Whenever the guest needs a CPUID value, the #VC handler will
> refer to this page.

That's right, and CPUID instructions can be introduced at pretty much
any stage of the boot process.
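Roughly, the lookup the #VC handlers do against the table looks
something like the sketch below. The struct layout follows the SNP
CPUID page format (48-byte entries after a small header), but the
names, the cpuid_table pointer, and the helper itself are illustrative
placeholders, not the exact patch code:

  #include <linux/types.h>
  #include <linux/errno.h>

  struct snp_cpuid_fn {
          u32 eax_in, ecx_in;             /* requested leaf/subleaf */
          u64 xcr0_in, xss_in;            /* XSAVE state, for leaf 0xD */
          u32 eax, ebx, ecx, edx;         /* validated results */
          u64 reserved;
  };

  struct snp_cpuid_table {
          u32 count;                      /* number of valid entries */
          u32 reserved1;
          u64 reserved2;
          struct snp_cpuid_fn fn[];
  };

  /* cpuid_table is assumed to point at the guest's copy of the page */
  static const struct snp_cpuid_table *cpuid_table;

  static int snp_cpuid_lookup(u32 leaf, u32 subleaf,
                              u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
  {
          u32 i;

          for (i = 0; i < cpuid_table->count; i++) {
                  const struct snp_cpuid_fn *fn = &cpuid_table->fn[i];

                  if (fn->eax_in != leaf || fn->ecx_in != subleaf)
                          continue;

                  *eax = fn->eax;
                  *ebx = fn->ebx;
                  *ecx = fn->ecx;
                  *edx = fn->edx;
                  return 0;
          }

          /* not in the table: the leaf reads as all-zero */
          *eax = *ebx = *ecx = *edx = 0;
          return -ENOENT;
  }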

> 
> 
> > Regardless, I think this can be simplified by orders of
> > magnitude if we allocated statically 4K for that CPUID page in
> > arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> > from the firmware to it and from now on, work with our own copy.
> 
> Actually, a VMM could populate more than one page for the CPUID
> table. One page can hold 64 entries, and I believe Mike is already
> running into that limit (with Qemu) and exploring ideas to extend it
> beyond a single page.

I added the range checks in this version so that a hypervisor can still
leave out all-zero entries, so I think the limit can be avoided
near-term at least. But yes, there's still a possibility we'll need an
extra page in the future. I'm not sure how scarce storage is for stuff
like __ro_after_init, so it's worth considering.
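
For the curious, the arithmetic behind the 64-entry limit, assuming the
48-byte entry layout sketched earlier: 64 * 48 = 3072 bytes of entries
plus a 16-byte header fits comfortably in one 4K page, while a second
page would roughly double the entry count. The range check itself
amounts to bounding the count a (potentially malicious) VMM hands us
before trusting the table; something like this, with
SNP_CPUID_COUNT_MAX as an assumed name for the per-page limit:

  /* 64 entries is what a single-page table can hold */
  #define SNP_CPUID_COUNT_MAX 64

  if (cpuid_table->count > SNP_CPUID_COUNT_MAX)
          return -EINVAL; /* or terminate the guest outright */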
