Message-ID: <5b1699b5-6766-b5c8-fe1f-faf5a9b7c97e@intel.com>
Date: Thu, 25 Jun 2020 14:37:57 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Borislav Petkov <bp@...en8.de>,
Daniel Gutson <daniel@...ypsium.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
Rob Herring <robh@...nel.org>, Tony Luck <tony.luck@...el.com>,
Rahul Tanwar <rahul.tanwar@...ux.intel.com>,
Xiaoyao Li <xiaoyao.li@...el.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
linux-kernel@...r.kernel.org, Richard Hughes <hughsient@...il.com>
Subject: Re: [PATCH] Ability to read the MKTME status from userspace (patch v2)

On 6/25/20 2:27 PM, Borislav Petkov wrote:
> On Thu, Jun 25, 2020 at 06:16:12PM -0300, Daniel Gutson wrote:
>> What didn't become clear from the thread last time is the direction to
>> proceed. Concrete suggestion?
> Here are two:
>
> https://lkml.kernel.org/r/20200619161752.GG32683@zn.tnic
> https://lkml.kernel.org/r/20200619161026.GF32683@zn.tnic
>
> but before that happens, I'd like to hear Dave confirm that when we
> expose all that information to userspace, it will actually be true and
> show the necessary bits which *actually* tell you that encryption is
> enabled.
>
> If you're still unclear, go over the thread again pls.

It boils down to this: we shouldn't expose low-level, vendor-specific
implementation details if we can avoid it. Let's expose something that
an application can actually use.
Something that would work for all of the TME, MKTME and SEV platforms
that I know of, and that would continue to work for a while, would be a
per-NUMA-node file (/sys/devices/system/node[X]/file) that says: "user
data on this node is protected by memory encryption".
SEV guests would always have a 1 on all nodes.
TME systems with no platform screwiness like PMEM would always have a 1.
Old systems would have a 0 in there.
TME systems which also have PMEM-only nodes would have a 0 on the PMEM
nodes and a 1 on the DRAM nodes.
Systems with screwy EFI_MEMORY_CPU_CRYPTO mixing within NUMA nodes would
turn it off for the screwy nodes.
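For illustration, a minimal sketch of what such a per-node attribute
could look like, modeled on the existing node attributes in
drivers/base/node.c. The attribute name and the node_memory_encrypted()
helper are made up for the example; they are not existing kernel APIs.

#include <linux/device.h>
#include <linux/node.h>

/*
 * Hypothetical: reports 1 when all user-addressable memory on this
 * node is covered by memory encryption (TME/MKTME/SEV), 0 otherwise.
 * Would show up as /sys/devices/system/node/nodeX/crypto_capable.
 */
static ssize_t crypto_capable_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct node *node_dev = to_node(dev);

	/* node_memory_encrypted() is an assumed helper, not a real one */
	return sprintf(buf, "%d\n",
		       node_memory_encrypted(node_dev->dev.id) ? 1 : 0);
}
static DEVICE_ATTR_RO(crypto_capable);
/* hooking the attribute into the node device's attribute group omitted */

Userspace would then just read the file and check for a 1, with no
knowledge of whether the machine is doing TME, MKTME or SEV.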
Is that concrete enough?