Message-ID: <2776fced-54c2-40eb-7921-1c68236c7f70@redhat.com>
Date: Wed, 8 Apr 2020 00:07:22 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Andy Lutomirski <luto@...capital.net>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Vivek Goyal <vgoyal@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
kvm list <kvm@...r.kernel.org>, stable <stable@...r.kernel.org>
Subject: Re: [PATCH v2] x86/kvm: Disable KVM_ASYNC_PF_SEND_ALWAYS

On 07/04/20 23:41, Andy Lutomirski wrote:
> 2. Access to bad memory results in #MC. Sure, #MC is a turd, but
> it’s an *architectural* turd. By all means, have a nice simple PV
> mechanism to tell the #MC code exactly what went wrong, but keep the
> overall flow the same as in the native case.
>
> I think I like #2 much better. It has another nice effect: a good
> implementation will serve as a way to exercise the #MC code without
> needing to muck with EINJ or with whatever magic Tony uses. The
> average kernel developer does not have access to a box with testable
> memory failure reporting.
I prefer #VE, but I can see how #MC has some appeal. However, #VE has a
mechanism to avoid reentrancy, unlike #MC. Without that, how would #MC
be any better than the current mess with an NMI happening in the first
few instructions of the #PF handler?
Paolo