Message-ID: <746497ff-992d-4659-aa32-a54c68ae83bf@oracle.com>
Date: Thu, 10 Mar 2022 13:07:33 -0500
From: Alejandro Jimenez <alejandro.j.jimenez@...cle.com>
To: Dave Hansen <dave.hansen@...el.com>, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
luto@...nel.org, peterz@...radead.org, x86@...nel.org,
linux-kernel@...r.kernel.org
Cc: thomas.lendacky@....com, brijesh.singh@....com,
kirill.shutemov@...ux.intel.com, hpa@...or.com,
pbonzini@...hat.com, seanjc@...gle.com, srutherford@...gle.com,
ashish.kalra@....com, darren.kenny@...cle.com,
venu.busireddy@...cle.com, boris.ostrovsky@...cle.com
Subject: Re: [RFC 0/3] Expose Confidential Computing capabilities on sysfs
On 3/9/2022 5:40 PM, Dave Hansen wrote:
> On 3/9/22 14:06, Alejandro Jimenez wrote:
>> On EPYC Milan host:
>>
>> $ grep -r . /sys/kernel/mm/mem_encrypt/*
>> /sys/kernel/mm/mem_encrypt/c_bit_position:51
> Why on earth would we want to expose this to userspace?
>
>> /sys/kernel/mm/mem_encrypt/sev/nr_sev_asid:509
>> /sys/kernel/mm/mem_encrypt/sev/status:enabled
>> /sys/kernel/mm/mem_encrypt/sev/nr_asid_available:509
>> /sys/kernel/mm/mem_encrypt/sev_es/nr_sev_es_asid:0
>> /sys/kernel/mm/mem_encrypt/sev_es/status:enabled
>> /sys/kernel/mm/mem_encrypt/sev_es/nr_asid_available:509
>> /sys/kernel/mm/mem_encrypt/sme/status:active
> For all of this... What will userspace *do* with it?
In my case, this information was useful for debugging failures while
testing the various features (e.g. the cbitpos property that must be
specified on the QEMU sev-guest object).
It gives an account of what is currently supported/enabled/active on
the host/guest, given that some of these capabilities interact with
other components and cause boot hangs or errors (e.g. AVIC+SME or
AVIC+SEV hangs at boot; SEV guests with some configurations need a
larger SWIOTLB limit).
The sysfs entry basically answers the questions in
https://github.com/AMDESE/AMDSEV#faq without needing to run
virsh/qmp-shell/rdmsr.
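For comparison, the manual route today is something along the lines of
the userspace sketch below, which pulls the same data out of CPUID
Fn8000_001F (field layout as I read it from the APM; the ASID counts
additionally need KVM module parameters or MSR access, which this does
not cover):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* AMD memory encryption capabilities live in CPUID Fn8000_001F */
        if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
                fprintf(stderr, "CPUID leaf 0x8000001f not available\n");
                return 1;
        }

        printf("SME supported:    %u\n", eax & 1);
        printf("SEV supported:    %u\n", (eax >> 1) & 1);
        printf("SEV-ES supported: %u\n", (eax >> 3) & 1);
        printf("C-bit position:   %u\n", ebx & 0x3f); /* QEMU cbitpos */
        printf("Max enc. guests:  %u\n", ecx);
        return 0;
}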
I am aware that having a new sysfs entry mostly to facilitate debugging
might not be warranted, so I have tagged this as an RFC to ask if others
working in this space have found additional use cases, or just want the
convenience of having the data for current and future CoCo features in a
single location.
>
> For nr_asid_available, I get it. It tells you how many guests you can
> still run. But, TDX will need the same logical thing. Should TDX hosts
> go looking for this in:
>
> /sys/kernel/mm/mem_encrypt/tdx/available_guest_key_ids
>
> ?
>
> If it's something that's common, it needs to be somewhere common.
I think it makes sense to have common attributes for all CoCo providers
under /sys/kernel/mm/mem_encrypt/. The various CoCo providers can create
entries under mem_encrypt/<feature> exposing the information relevant to
their specific features, as these patches implement for the AMD case,
and populate or link the <common_attr> attribute with the appropriate value.
Then we can have:
/sys/kernel/mm/mem_encrypt/
-- common_attr
-- sme/
-- sev/
-- sev_es/
or:
/sys/kernel/mm/mem_encrypt/
-- common_attr
-- tdx/
Note that at any given time we only create the entries that are
applicable to the hardware we are running on, so there is never a mix
of tdx and sme/sev subdirs.
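To make the registration side a bit more concrete, here is a rough
sketch of the idea (all names below, e.g. mem_encrypt_register_provider
and nr_keys_available, are made up for illustration and are not the
interface these patches actually define):

#include <linux/errno.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/types.h>

/* kobject backing /sys/kernel/mm/mem_encrypt/, assumed to be created
 * by the common code at init time */
extern struct kobject *mem_encrypt_kobj;

struct mem_encrypt_provider {
        const char *name;                    /* "sme", "sev", "tdx", ... */
        const struct attribute_group *group; /* provider-specific files */
        u64 (*nr_keys_available)(void);      /* feeds the common attribute */
};

int mem_encrypt_register_provider(const struct mem_encrypt_provider *p)
{
        struct kobject *dir;

        /* create mem_encrypt/<name>/ only for providers present on
         * this hardware, so tdx and sme/sev subdirs never coexist */
        dir = kobject_create_and_add(p->name, mem_encrypt_kobj);
        if (!dir)
                return -ENOMEM;

        return sysfs_create_group(dir, p->group);
}

The common attribute (whatever we end up calling it) could then be
populated by the core code from ->nr_keys_available(), so userspace
only ever looks in one place for it.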
I suspect it will be difficult to agree on what is "common", or even on
a descriptive name for it. Let's say this common attribute will be:
/sys/kernel/mm/mem_encrypt/common_key
where common_key can represent AMD SEV or SEV-{ES,SNP} ASIDs, Intel TDX
KeyIDs (private/shared), s390x SEIDs (Secure Execution IDs), or
<insert relevant ARM CCA attribute>.
We can have a (probably long) discussion to agree on the above; this
patchset just attempts to provide a framework for registering different
providers, and implements the current AMD capabilities.
Thank you,
Alejandro