Message-ID: <f4228262-02f9-3af5-8ef5-be109b5d3d13@amd.com>
Date: Fri, 6 Oct 2023 14:24:11 -0500
From: "Kalra, Ashish" <ashish.kalra@....com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"Rafael J. Wysocki" <rafael@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Elena Reshetova <elena.reshetova@...el.com>,
Jun Nakajima <jun.nakajima@...el.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
kexec@...ts.infradead.org, linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/13] x86/tdx: Convert shared memory back to private on
kexec
On 10/5/2023 5:28 PM, Kirill A. Shutemov wrote:
> On Thu, Oct 05, 2023 at 05:01:23PM -0500, Kalra, Ashish wrote:
>> On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
>>> On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
>>>>> +static void unshare_all_memory(bool unmap)
>>>>> +{
>>>>> +	unsigned long addr, end;
>>>>> +	long found = 0, shared;
>>>>> +
>>>>> +	/*
>>>>> +	 * Walk the direct mapping and convert all shared memory back
>>>>> +	 * to private.
>>>>> +	 */
>>>>> +
>>>>> +	addr = PAGE_OFFSET;
>>>>> +	end = PAGE_OFFSET + get_max_mapped();
>>>>> +
>>>>> +	while (addr < end) {
>>>>> +		unsigned long size;
>>>>> +		unsigned int level;
>>>>> +		pte_t *pte;
>>>>> +
>>>>> +		pte = lookup_address(addr, &level);
>>>>
>>>> IIRC, you were earlier walking the direct mapping using
>>>> walk_page_range_novma(); any particular reason to use lookup_address()
>>>> instead?
>>>
>>> walk_page_range_novma() wants the mmap lock to be taken, but that is
>>> tricky as we run here from atomic context in the crash case.
>>>
>>> I considered using trylock to bypass the limitation, but it is a hack.
>>>
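For reference, a minimal sketch of that trylock hack (my reading of it, not
the actual patch; it assumes the walk runs against init_mm with
caller-supplied mm_walk_ops) would be something like:

	static void unshare_all_memory_novma(const struct mm_walk_ops *ops)
	{
		/*
		 * We may be in atomic/crash context, so never sleep on
		 * mmap_lock; if the lock is contended the walk is silently
		 * skipped, which is exactly why this is a hack.
		 */
		if (!mmap_read_trylock(&init_mm))
			return;

		walk_page_range_novma(&init_mm, PAGE_OFFSET,
				      PAGE_OFFSET + get_max_mapped(),
				      ops, NULL, NULL);
		mmap_read_unlock(&init_mm);
	}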
>>>>
>>>>> +		size = page_level_size(level);
>>>>> +
>>>>> +		if (pte && pte_decrypted(*pte)) {
>>>>
>>>> We additionally need a check for pte_none() here to handle physical
>>>> memory holes in the direct mapping.
>>>
>>> lookup_address() returns NULL for none entries.
>>>
>>
>> Looking at lookup_address_in_pgd(): at the pte level it simply returns
>> pte_offset_kernel(), and there does not seem to be a check to return NULL
>> if pte_none()?
>
> Hm. You are right.
>
> I think it is yet another quirk in how lookup_address() is implemented. We
> need to straighten it out too.
>
> There are two options: either make lookup_address() return the pointer for
> the entry even if it is none, or add a check for pte_none() after
> pte_offset_kernel() and return NULL if it is true.
>
> I like the first option more, as it allows the caller to populate the
> entry if it wants.
Yes, I like the first option.
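
With the first option, callers such as unshare_all_memory() would then have
to treat a none entry as not-present themselves. A minimal sketch of what
the loop above would need (just illustrating the extra check, not the final
patch):

	pte = lookup_address(addr, &level);
	size = page_level_size(level);

	/* lookup_address() may now return a pointer to a none entry. */
	if (pte && !pte_none(*pte) && pte_decrypted(*pte)) {
		/* ... convert the range back to private ... */
	}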
>
>>>>> +			int pages = size / PAGE_SIZE;
>>>>> +
>>>>> +			/*
>>>>> +			 * Touching memory with the shared bit set triggers
>>>>> +			 * an implicit conversion to shared.
>>>>> +			 *
>>>>> +			 * Make sure nobody touches the shared range from
>>>>> +			 * now on.
>>>>> +			 *
>>>>> +			 * Bypass unmapping for the crash scenario. Unmapping
>>>>> +			 * requires a sleepable context, but in the crash case
>>>>> +			 * the kernel hits this code path with interrupts disabled.
>>>>
>>>> In the case of SNP we will need to temporarily enable interrupts during
>>>> this unsharing, as we invoke set_memory_encrypted(), which then hits a
>>>> BUG_ON() in cpa_flush() if interrupts are disabled.
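(For context, the assertion that fires is, as far as I can see in current
kernels, the irqs_disabled() check near the top of cpa_flush() in
arch/x86/mm/pat/set_memory.c, roughly:

	static void cpa_flush(struct cpa_data *data, int cache)
	{
		BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
		...

so any path that reaches set_memory_encrypted() with interrupts off will
trip it.)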
>>>
>>> Do you really need the full set_memory_encrypted()? Can't you do
>>> something lighter?
>>>
>> We need to modify the PTE to set the C-bit to 1, so that will require
>> cpa_flush(), though we can probably add something lighter that does
>> clflush_cache_range() directly?
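
Something like the following, perhaps (a sketch with a made-up helper name;
it assumes the caller has already updated the PTE and knows the virtual
range, and whether this ordering is actually sufficient for SNP would still
need to be verified):

	static void snp_flush_converted(void *vaddr, unsigned int size)
	{
		/* Flush cache lines that may hold data under the old key. */
		clflush_cache_range(vaddr, size);

		/* Drop stale TLB entries for the converted range. */
		flush_tlb_kernel_range((unsigned long)vaddr,
				       (unsigned long)vaddr + size);
	}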
>
> For TDX, I don't touch the shared bit as nobody is supposed to touch the
> memory after that point (and set_memory_np() enforces it for the !crash
> case).
>
> Can't SNP do the same?
>
No, we need to make the PSC call for the HV to update the RMP, then set
C-bit=1 in the PTE, and then do a PVALIDATE to switch the page back to
private, so it needs something like a full set_memory_encrypted().
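
Roughly, per 4K page, the order has to be as below. This is only a sketch:
psc_request_private() and set_pte_cbit() are stand-in names for the PSC and
PTE-update steps, not the in-tree API, while pvalidate() is the existing
wrapper from <asm/sev.h>:

	static int snp_make_page_private(unsigned long vaddr, unsigned long paddr)
	{
		int ret;

		/* 1. PSC: ask the HV to flip the RMP entry to private. */
		ret = psc_request_private(paddr);	/* stand-in name */
		if (ret)
			return ret;

		/* 2. Set C-bit=1 in the direct-map PTE. */
		set_pte_cbit(vaddr);			/* stand-in name */

		/* 3. PVALIDATE so the guest accepts the page as private. */
		return pvalidate(vaddr, RMP_PG_SIZE_4K, true);
	}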
Thanks,
Ashish