Message-ID: <b3c65f9d-5fd3-22c5-cd23-481774d92222@linux.ibm.com>
Date: Fri, 20 Aug 2021 21:36:46 +0300
From: Dov Murik <dovmurik@...ux.ibm.com>
To: Andrew Scull <ascull@...gle.com>, Ard Biesheuvel <ardb@...nel.org>
Cc: linux-efi <linux-efi@...r.kernel.org>,
Borislav Petkov <bp@...e.de>,
Ashish Kalra <ashish.kalra@....com>,
Brijesh Singh <brijesh.singh@....com>,
Tom Lendacky <thomas.lendacky@....com>,
James Morris <jmorris@...ei.org>,
"Serge E. Hallyn" <serge@...lyn.com>,
Andi Kleen <ak@...ux.intel.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
James Bottomley <jejb@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@...ux.ibm.com>,
Jim Cadden <jcadden@....com>, linux-coco@...ts.linux.dev,
linux-security-module@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Dov Murik <dovmurik@...ux.ibm.com>
Subject: Re: [PATCH 3/3] virt: Add sev_secret module to expose confidential
computing secrets
On 19/08/2021 16:02, Andrew Scull wrote:
> On Mon, 16 Aug 2021 at 10:57, Ard Biesheuvel <ardb@...nel.org> wrote:
>>
>> On Fri, 13 Aug 2021 at 15:05, Andrew Scull <ascull@...gle.com> wrote:
>>>
>>> On Mon, Aug 09, 2021 at 07:01:57PM +0000, Dov Murik wrote:
[...]
>>>
>>>> +static int sev_secret_unlink(struct inode *dir, struct dentry *dentry)
>>>> +{
>>>> + struct sev_secret *s = sev_secret_get();
>>>> + struct inode *inode = d_inode(dentry);
>>>> + struct secret_entry *e = (struct secret_entry *)inode->i_private;
>>>> + int i;
>>>> +
>>>> + if (e) {
>>>> + /* Zero out the secret data */
>>>> + memzero_explicit(e->data, secret_entry_data_len(e));
>>>
>>> Would there be a benefit in flushing these zeros?
>>>
>>
>> Do you mean cache clean+invalidate? Better to be precise here.
>
> At least a clean, to have the zeros written back to memory from the
> cache, in order to overwrite the secret.
>
I agree, but I'm not sure how to implement this:
I see there's an exported arch_wb_cache_pmem function which internally
(in arch/x86/lib/usercopy_64.c) calls clean_cache_range, which seems to
do what we want (assuming the secret can be longer than a cache line).
But arch_wb_cache_pmem is declared in include/linux/libnvdimm.h and
guarded by #ifdef CONFIG_ARCH_HAS_PMEM_API -- neither of which seems
related to what I'm trying to do.
I see there's an exported clflush_cache_range for x86 -- but that's a
clean+invalidate if I understand correctly, not just a clean.
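For example (untested sketch, x86-only; note it invalidates the cache
lines in addition to cleaning them), the unlink path could call it right
after zeroing:

	if (e) {
		/* Zero out the secret data and push the zeros to memory */
		memzero_explicit(e->data, secret_entry_data_len(e));
		clflush_cache_range(e->data, secret_entry_data_len(e));
		e->guid = NULL_GUID;
	}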
Suggestions on how to approach this? I could copy the clean_cache_range
implementation into the sev_secret module, but hopefully there's a better
way to reuse it. Maybe export clean_cache_range in x86?
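If copying it is the way to go, I mean something like the following
(untested sketch mirroring clean_cache_range; the helper name is just a
placeholder):

	#include <asm/processor.h>	/* boot_cpu_data */
	#include <asm/special_insns.h>	/* clwb() */

	/* Write back (clean) every cache line covering [addr, addr + size) */
	static void sev_secret_clean_cache_range(void *addr, size_t size)
	{
		u16 clflush_size = boot_cpu_data.x86_clflush_size;
		unsigned long clflush_mask = clflush_size - 1;
		void *vend = addr + size;
		void *p;

		for (p = (void *)((unsigned long)addr & ~clflush_mask);
		     p < vend; p += clflush_size)
			clwb(p);
	}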
Since this is for SEV, the solution can be x86-specific, but if there's a
generic way I guess it's better (I think nothing else in the sev_secret
module is x86-specific).
-Dov
>>
>>>> + e->guid = NULL_GUID;
>>>> + }
>>>> +
>>>> + inode->i_private = NULL;
>>>> +
>>>> + for (i = 0; i < SEV_SECRET_NUM_FILES; i++)
>>>> + if (s->fs_files[i] == dentry)
>>>> + s->fs_files[i] = NULL;
>>>> +
>>>> + /*
>>>> + * securityfs_remove tries to lock the directory's inode, but we reach
>>>> + * the unlink callback when it's already locked
>>>> + */
>>>> + inode_unlock(dir);
>>>> + securityfs_remove(dentry);
>>>> + inode_lock(dir);
>>>> +
>>>> + return 0;
>>>> +}