Message-ID: <fab36c45-3cdc-3ec0-a76d-4a2a2fbfdfc8@amd.com>
Date: Tue, 14 Apr 2020 15:18:36 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Dave Hansen <dave.hansen@...el.com>,
Mike Stunes <mstunes@...are.com>,
Joerg Roedel <joro@...tes.org>
Cc: "x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Hellstrom <thellstrom@...are.com>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
Joerg Roedel <jroedel@...e.de>
Subject: Re: [PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime
handler
On 4/14/20 3:16 PM, Tom Lendacky wrote:
>
>
> On 4/14/20 3:12 PM, Dave Hansen wrote:
>> On 4/14/20 1:04 PM, Tom Lendacky wrote:
>>>> The return value of set_memory_decrypted needs to be checked. I see it
>>>> consistently returning -ENOMEM. I've traced that back to split_large_page
>>>> in arch/x86/mm/pat/set_memory.c.
>>>
>>> At that point the guest won't be able to communicate with the
>>> hypervisor either. Maybe we should BUG() here to terminate further
>>> processing?
>>
>> Escalating an -ENOMEM into a crashed kernel seems a bit extreme.
>> Granted, the guest may be in an unrecoverable state, but the host
>> doesn't need to be, too.
>>
>
> The host wouldn't be. This only happens in a guest, so it would just
> cause the guest kernel to panic early in boot.
And I should add that it would only impact an SEV-ES guest.
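For what it's worth, the fatal handling in the GHCB setup path could look
roughly like the sketch below. This is only to illustrate the idea being
discussed; setup_ghcb_page() and ghcb_page are made-up names for this
example, not the code from the patch.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/set_memory.h>

/*
 * Illustrative sketch only: setup_ghcb_page() is not the helper from
 * this patch series.
 */
static int __init setup_ghcb_page(void *ghcb_page)
{
        int ret;

        /*
         * The GHCB has to be shared with the hypervisor, so clear the
         * C-bit on the page. If that fails, the guest can't use the
         * GHCB at all, so terminate instead of limping along.
         */
        ret = set_memory_decrypted((unsigned long)ghcb_page, 1);
        if (ret)
                panic("SEV-ES: unable to decrypt GHCB page (%d)\n", ret);

        /* Zero the page now that it is mapped unencrypted. */
        memset(ghcb_page, 0, PAGE_SIZE);

        return 0;
}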
Thanks,
Tom
>
> Thanks,
> Tom
>