Message-ID: <20180823152907.GA1488@linux.intel.com>
Date:   Thu, 23 Aug 2018 08:29:07 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Brijesh Singh <brijesh.singh@....com>,
        Borislav Petkov <bp@...e.de>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "Lendacky, Thomas" <Thomas.Lendacky@....com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: SEV guest regression in 4.18

On Thu, Aug 23, 2018 at 01:26:55PM +0200, Paolo Bonzini wrote:
> On 22/08/2018 22:11, Brijesh Singh wrote:
> > 
> > Yes, this is one of the approaches I have in mind. It will avoid
> > splitting the larger pages; I am thinking that early in the boot code
> > we can look up this special section, decrypt it in-place, and probably
> > map it with C=0. The only downside is that it will increase the data
> > section footprint a bit, because we need to align this section to
> > PMD_SIZE.
> 
> If you can ensure it doesn't span a PMD, maybe it does not need to be
> aligned; you could establish a C=0 mapping of the whole 2M around it.
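
If I'm following, the suggestion is roughly the below (pseudo-code; the
section name and the early_map_decrypted() helper are made up, purely to
illustrate): keep the shared data in a dedicated section that doesn't span
a PMD, then map the whole 2M region containing it with C=0 instead of
forcing 2M alignment of the section itself.

extern char __start_sev_shared[], __end_sev_shared[];

static void __init map_sev_shared_data(void)
{
        unsigned long pa = __pa_symbol(__start_sev_shared);

        /* Map the surrounding 2M region with the C-bit clear. */
        early_map_decrypted(ALIGN_DOWN(pa, PMD_SIZE), PMD_SIZE);
}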

Wouldn't that result in exposing/leaking whatever code/data happened
to reside on the same 2M page (or corrupting it if the entire page
isn't decrypted)?  Or are you suggesting that we'd also leave the
encrypted mapping intact?  If it's the latter...

Does hardware include the C-bit in the cache tag?  I.e. are the C=0 and
C=1 variations of the same PA treated as different cache lines?  If
so, we could also treat the unencrypted variation as a separate PA by
defining it to be (ACTUAL_PA | (1 << x86_phys_bits)), (re)adjusting
x86_phys_bits if necessary to get the kernel to allow the address.
init_memory_mapping() could then alias every PA with an unencrypted
VA mapping, which would allow the kernel to access any PA unencrypted
by using virt_to_phys() and phys_to_virt() to translate an encrypted
VA to an unencrypted VA.  It would mean doubling INIT_PGD_PAGE_COUNT,
but that'd be a one-time cost regardless of how many pages needed to
be accessed with C=0.
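
Roughly, the translation side would look something like this (names are
made up, purely illustrative): the unencrypted alias of a PA is the PA
with a bit set just above x86_phys_bits, and init_memory_mapping() would
cover those alias PAs with C=0 mappings in the direct map.

#define UNENC_PA_BIT    (1UL << boot_cpu_data.x86_phys_bits)

static inline void *virt_to_unenc_virt(void *va)
{
        /* Encrypted VA -> PA -> alias PA -> unencrypted VA. */
        return phys_to_virt(virt_to_phys(va) | UNENC_PA_BIT);
}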

> Paolo
