Message-Id: <0255CF03-D45D-45E0-BC61-79159B94ED44@amacapital.net>
Date: Tue, 7 Apr 2020 15:29:23 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Vivek Goyal <vgoyal@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
kvm list <kvm@...r.kernel.org>, stable <stable@...r.kernel.org>
Subject: Re: [PATCH v2] x86/kvm: Disable KVM_ASYNC_PF_SEND_ALWAYS
> On Apr 7, 2020, at 3:07 PM, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 07/04/20 23:41, Andy Lutomirski wrote:
>> 2. Access to bad memory results in #MC. Sure, #MC is a turd, but
>> it’s an *architectural* turd. By all means, have a nice simple PV
>> mechanism to tell the #MC code exactly what went wrong, but keep the
>> overall flow the same as in the native case.
>>
>> I think I like #2 much better. It has another nice effect: a good
>> implementation will serve as a way to exercise the #MC code without
>> needing to muck with EINJ or with whatever magic Tony uses. The
>> average kernel developer does not have access to a box with testable
>> memory failure reporting.
>
> I prefer #VE, but I can see how #MC has some appeal. However, #VE has a
> mechanism to avoid reentrancy, unlike #MC. How would that be better
> than the current mess with an NMI happening in the first few
> instructions of the #PF handler?
>
>
Whatever vector we use for this has to be an IST vector, due to the possibility of hitting a memory failure right after SYSCALL. I imagine that making #VE use IST would be unfortunate.
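
(By "IST vector" I mean the treatment #MC already gets in the IDT: on delivery the CPU switches to a dedicated per-vector stack from the TSS, so an exception that lands in the SYSCALL entry window, before the entry code has switched off the user-controlled RSP, is still handled on a known-good stack. A rough sketch of what that looks like, written in the style of arch/x86/kernel/idt.c; the #VE line and its names are purely hypothetical:)

/*
 * Sketch only: #MC is installed with an IST index so delivery switches
 * to a dedicated exception stack, even right after SYSCALL where RSP
 * still points at the user stack.  A #VE used to report memory failures
 * would need the same treatment, which is the part I find unfortunate.
 */
static const struct idt_data ist_idts_sketch[] = {
	ISTG(X86_TRAP_MC, &machine_check, IST_INDEX_MCE),
	/* hypothetical: ISTG(X86_TRAP_VE, &ve_memory_failure, IST_INDEX_VE) */
};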
I think #MC has a mechanism to prevent reentrancy to a limited extent. How does #VE avoid reentrancy?
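
To spell out the "limited extent" part: the #MC guard is MCG_STATUS.MCIP. Hardware sets it when the exception is delivered, a second #MC while it is still set shuts the machine down instead of nesting, and the handler clears it on the way out. Very roughly (a sketch of the architectural behavior, not the real do_machine_check(); the function name is illustrative):

static void machine_check_sketch(struct pt_regs *regs)
{
	/*
	 * At this point MCIP is already set in IA32_MCG_STATUS; another
	 * #MC before we clear it does not nest, it causes a shutdown.
	 */

	/* ... read the MCi_STATUS banks, decide whether we can recover ... */

	/* Clearing MCG_STATUS (and thus MCIP) re-arms #MC delivery. */
	wrmsrl(MSR_IA32_MCG_STATUS, 0);
}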