Message-ID: <247d9a25-f32f-d01b-61ff-b1966e382907@kernel.org>
Date: Tue, 18 May 2021 10:10:55 -0700
From: Andy Lutomirski <luto@...nel.org>
To: "Bae, Chang Seok" <chang.seok.bae@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>,
X86 ML <x86@...nel.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"Shankar, Ravi V" <ravi.v.shankar@...el.com>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker
On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
> On May 15, 2021, at 11:01, Andy Lutomirski <luto@...nel.org> wrote:
>> On 5/14/21 1:14 PM, Chang S. Bae wrote:
>>> Key Locker [1][2] is a new security feature available in new Intel CPUs to
>>> protect data encryption keys for the Advanced Encryption Standard
>>> algorithm. The protection limits the amount of time an AES key is exposed
>>> in memory by sealing a key and referencing it with new AES instructions.
>>>
>>> The new AES instruction set is a successor to Intel's AES-NI (AES New
>>> Instructions). Users may switch to the Key Locker version from crypto
>>> libraries. This series includes a new AES implementation for the Crypto
>>> API, which was validated through the crypto unit tests. The performance in
>>> the test cases was measured and found comparable to the AES-NI version.
>>>
>>> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
>>> needs to load it and ensure it remains unchanged as long as CPUs are operational.
>>
>> I have high-level questions:
>>
>> What is the expected use case?
>
> The wrapping key here is only used for new AES instructions.
>
> I’m aware of their potential use cases, such as encrypting file systems or disks.
I would like to understand what people are actually going to do with
this. Give me a user story or two, please. If it turns out to be
useless, I would rather not merge it.
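For reference, my mental model of the basic flow, once the kernel has set
CR4.KL and loaded an IWKey, is roughly the below.  This is an untested
sketch using the compiler's Key Locker intrinsics (immintrin.h with -mkl);
check the intrinsics guide for the exact semantics:

#include <immintrin.h>	/* _mm_encodekey128_u32(), _mm_aesenc128kl_u8() */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint8_t raw_key[16] = {0};	/* stand-in for real AES-128 key material */
	uint8_t handle[48];		/* 384-bit handle produced by ENCODEKEY128 */
	__m128i key = _mm_loadu_si128((const __m128i *)raw_key);

	/* Wrap the key against the CPU's internal wrapping key (IWKey);
	 * 0 = no handle restrictions. */
	_mm_encodekey128_u32(0, key, handle);

	/* The raw key is no longer needed; only the handle is kept around. */
	memset(raw_key, 0, sizeof(raw_key));

	/* Encrypt one block by referencing the handle instead of a key schedule. */
	__m128i out;
	unsigned char status = _mm_aesenc128kl_u8(&out, _mm_setzero_si128(), handle);

	/* 'status' reports whether the handle authenticated against the
	 * current IWKey; see the intrinsics guide for the exact convention. */
	printf("handle check: %u\n", status);
	return 0;
}

So after wrapping, software only ever holds the handle, and the handle is
only usable on a CPU with the same IWKey loaded.  What I want to see
spelled out is which attacks that actually stops for a real deployment.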
>
>> I certainly understand how KL is valuable in a context where
>> a verified boot process installs some KL keys that are not subsequently
>> accessible outside the KL ISA, but Linux does not really work like this.
>
> Do you mind elaborating on the concern? I am trying to understand whether there
> is any issue with PATCH3 [1], specifically.
My concern has nothing to do with your patches per se.
I want to understand the entire workflow that makes Key Locker safer
than not using Key Locker. Something like:
Step 1: Computer is powered on.
Step 2: Boot loader loads Linux
Step 3: Linux does such-and-such
Step 4: Attacker compromises the computer in the following way

and an explanation of why this is realistic and how Key Locker helps
would be nice.
>> What is the expected interaction between a KL-using VM guest and the
>> host VMM? Will there be performance impacts (to context switching, for
>> example) if a guest enables KL, even if the guest does not subsequently
>> do anything with it? Should Linux actually enable KL if it detects that
>> it's a VM guest? Should Linux use a specific keying method as a guest?
>
> First of all, there is an RFC series for KVM [2].
>
> Each CPU has a single internal key state, so the key needs to be reloaded when
> switching between guest and host if both use it. The proposed approach enables
> it exclusively: it is exposed to guests only when it is disabled in the host.
> Then, I guess, a guest may enable it.
I read that series. This is not a good solution.
I can think of at least a few reasonable ways that a host and a guest
can cooperate to, potentially, make KL useful.
a) Host knows that the guest will never migrate, and guest delegates
IWKEY management to the host. The host generates a random key and does
not permit the guest to use LOADIWKEY. The guest shares the random key
with the host. Of course, this means that a host key handle that leaks
to a guest can be used within the guest.
b) Host may migrate the guest. Guest delegates IWKEY management to the
host, and the host generates and remembers a key for the guest. On
migration, the host forwards the key to the new host. The host can
still internally use any type of key, but context switches may be quite
slow (sketched below).
c) Guest wants to manage its own non-random key. Host lets it and
context switches it.
d) Guest does not need KL and leaves CR4.KL clear. Host does whatever
it wants with no overhead.
All of these have tradeoffs.
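To make the context-switch cost in (b)/(c) concrete, the switching would
have to look roughly like this.  Everything below is made up for
illustration (it is not the KVM series' code); the point is just where
the LOADIWKEYs land:

struct kl_state {
	bool enabled;
	u8 iwkey[48];	/* 384-bit IWKey value kept in memory (software-chosen
			 * or backup-restored -- a hardware-random key cannot
			 * be read back and so cannot be switched this way) */
};

/* Hypothetical helpers around LOADIWKEY; each CPU has only one IWKey. */
static void kl_switch_to_guest(struct kl_state *host, struct kl_state *guest)
{
	if (guest->enabled)
		loadiwkey(guest->iwkey);
}

static void kl_switch_to_host(struct kl_state *host, struct kl_state *guest)
{
	if (guest->enabled && host->enabled)
		loadiwkey(host->iwkey);
}

That extra LOADIWKEY on every switch is where the slowness in (b)/(c)
comes from, and in (b) the key additionally has to travel with the guest
on migration.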
My current thought is that, if Linux is going to support Key Locker,
then this all needs to be explicitly controlled. On initial boot, Linux
should not initialize Key Locker. Upon explicit administrator request
(via sysfs?), Linux will initialize Key Locker in the mode requested by
the administrator. Modes could include (a rough sketch follows the list):
native_random_key: Use a random key per the ISA.
native_kernel_key_remember: Use a random key but load it as a non-random
key. Remember the key in kernel memory and use it for S3 resume, etc.
native_kernel_key_backup: Use a random key, put it in the backup
storage, and forget it. Use the backup for resume, etc.
native_kernel_key_norestore: Use a random key. The key is lost on any
power transition that forgets the key. Backup is not used.
paravirt_any: Ask the hypervisor to handle keying. Any mechanism is
acceptable.
paravirt_random: Ask the hypervisor for a random key. Only succeeds if
we get an actual random key.
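Roughly, and with every name below invented (the sysfs path, the helpers,
all of it), the opt-in could look like:

/* Sketch only -- all identifiers are hypothetical. */
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/errno.h>

enum kl_mode {
	KL_MODE_OFF,			/* boot default: Key Locker untouched */
	KL_MODE_NATIVE_RANDOM_KEY,
	KL_MODE_NATIVE_KERNEL_KEY_REMEMBER,
	KL_MODE_NATIVE_KERNEL_KEY_BACKUP,
	KL_MODE_NATIVE_KERNEL_KEY_NORESTORE,
	KL_MODE_PARAVIRT_ANY,
	KL_MODE_PARAVIRT_RANDOM,
};

static enum kl_mode kl_current_mode = KL_MODE_OFF;

static int kl_parse_mode(const char *buf);	/* hypothetical: name -> enum */
static int kl_initialize(enum kl_mode mode);	/* hypothetical: set CR4.KL and
						 * load the IWKey per the mode */

static ssize_t mode_store(struct kobject *kobj, struct kobj_attribute *attr,
			  const char *buf, size_t count)
{
	int mode = kl_parse_mode(buf);
	int ret;

	if (mode < 0)
		return mode;
	/* One shot: once key handles may exist, re-keying would break them. */
	if (kl_current_mode != KL_MODE_OFF)
		return -EBUSY;

	ret = kl_initialize(mode);
	if (ret)
		return ret;

	kl_current_mode = mode;
	return count;
}

The administrator would then write one of the mode names to, say,
/sys/kernel/keylocker/mode (path made up), and a guest kernel would pick
one of the paravirt modes instead of a native one.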
Does this make sense?