Message-ID: <87y2q5ay8q.fsf@vitty.brq.redhat.com>
Date:   Wed, 06 May 2020 17:17:57 +0200
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Vivek Goyal <vgoyal@...hat.com>
Cc:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Andy Lutomirski <luto@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH RFC 1/6] Revert "KVM: async_pf: Fix #DF due to inject "Page not Present" and "Page Ready" exceptions simultaneously"

Vivek Goyal <vgoyal@...hat.com> writes:

> On Wed, Apr 29, 2020 at 11:36:29AM +0200, Vitaly Kuznetsov wrote:
>> Commit 9a6e7c39810e ("KVM: async_pf: Fix #DF due to inject "Page not
>> Present" and "Page Ready" exceptions simultaneously") added protection
>> against the 'page ready' notification coming before 'page not ready' is
>> delivered.
>
> Hi Vitaly,
>
> The description of the commit seems to suggest that it is solving a
> double fault issue: both "page not present" and "page ready" exceptions
> get queued, and on the next vcpu entry this results in a double fault.
>
> It does not seem to solve the issue of "page not ready" being delivered
> before "page ready". The guest can already handle that, and it's not an
> issue.
>
>> This situation seems to be impossible since commit 2a266f23550b
>> ("KVM MMU: check pending exception before injecting APF) which added
>> 'vcpu->arch.exception.pending' check to kvm_can_do_async_pf.
>
> The original commit description is confusing too. It says:
>
> "For example, when two APF's for page ready happen after one exit and
>  the first one becomes pending, the second one will result in #DF.
>  Instead, just handle the second page fault synchronously."
>
> So it seems to be trying to ensure that two "page ready" exceptions
> don't get queued simultaneously. But you can't fall back to the
> synchronous mechanism once you have started the async pf protocol.
> Once you have started the async page fault protocol by sending "page
> not ready", you have to send "page ready". So I am not sure how the
> above commit solved the issue of two "page ready" events not being
> queued at the same time.
>
> I am wondering what problem this commit solved. It looks like it can
> avoid queueing two "page not ready" events. But can that even happen?
>
>> 
>> On x86, kvm_arch_async_page_present() has only one call site: the
>> kvm_check_async_pf_completion() loop, and we only enter the loop when
>> kvm_arch_can_inject_async_page_present(vcpu) returns true, which, when
>> the async pf MSR is enabled, translates into kvm_can_do_async_pf().
>
> kvm_check_async_pf_completion() skips injecting "page ready" if the
> fault can't be injected now. Does that mean we leave it queued and it
> will be injected after the next exit?
>
> If yes, then the previous commit kind of makes sense: it will not
> queue up two exceptions at the same time but will instead wait to
> queue the exception until after the next exit. But the commit
> description still seems to be wrong in the sense that it is not
> falling back to a synchronous page fault for "page ready" events.

As I'll be dropping 'page ready' event delivery via #PF this doesn't
really matter much, but after staring at the code for a while I managed
to convince myself that this particular code path is just unreachable.
I don't think we can get a #DF now with two 'page not ready' exceptions;
kvm_can_do_async_pf() should not allow that.
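
For reference, this is the check I have in mind; a condensed sketch of
kvm_can_do_async_pf() from arch/x86/kvm/mmu/mmu.c around this time
(trimmed here, exact details may differ by tree):

bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
{
	/*
	 * The vcpu->arch.exception.pending check added by commit
	 * 2a266f23550b: no new async #PF while an exception is pending.
	 */
	if (unlikely(!lapic_in_kernel(vcpu) ||
		     kvm_event_needs_reinjection(vcpu) ||
		     vcpu->arch.exception.pending))
		return false;

	if (!vcpu->arch.apf.delivery_as_pf_vmexit && is_guest_mode(vcpu))
		return false;

	/*
	 * If interrupts are off we cannot even use an artificial
	 * halt state.
	 */
	return kvm_arch_interrupt_allowed(vcpu);
}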

>
> try_async_pf() also calls kvm_can_do_async_pf(). And IIUC, it will
> fall back to a synchronous fault if injecting an async_pf is not
> possible at this point in time. So that means that despite async pf
> being enabled, not all page faults will take that route, and some
> will fall back to synchronous faults.

This sounds correct.
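
For the archives, the fallback you describe is the tail of
try_async_pf(); a trimmed sketch from arch/x86/kvm/mmu/mmu.c (memslot
validation and a few details omitted):

static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, bool write,
			 bool *writable)
{
	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
	bool async = false;

	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
	if (!async)
		return false; /* *pfn already holds the correct page */

	if (!prefault && kvm_can_do_async_pf(vcpu)) {
		trace_kvm_try_async_get_page(cr2_or_gpa, gfn);
		if (kvm_find_async_pf_gfn(vcpu, gfn)) {
			/* Second fault on a gfn with an APF in flight. */
			trace_kvm_async_pf_doublefault(cr2_or_gpa, gfn);
			kvm_make_request(KVM_REQ_APF_HALT, vcpu);
			return true;
		} else if (kvm_arch_setup_async_pf(vcpu, cr2_or_gpa, gfn))
			return true;
	}

	/* Fall back to a synchronous fault: the vCPU blocks here. */
	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable);
	return false;
}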

> I am concerned about how this
> will work for reporting errors back to the guest (for the virtiofs use
> case). If we are relying on the async pf mechanism to also be able to
> report errors back, then we can't afford to do synchronous page faults,
> because we don't have a way to report errors back to the guest and it
> will hang (if the page can't be faulted in).

Currently, if a 'page not ready' event was delivered then we can't skip
the 'page ready' event; we'll have to deliver it. 'Page not ready' is
still going to be delivered as an exception, and unless we really need
to solve complex problems like delivering nested exceptions this should
work.
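
For context, the guest already pairs the two events in its #PF handler
(arch/x86/kernel/kvm.c); roughly, with tracing and RCU details trimmed:

dotraplinkage void
do_async_page_fault(struct pt_regs *regs, unsigned long error_code,
		    unsigned long address)
{
	switch (kvm_read_and_reset_pf_reason()) {
	default:
		/* Not an async PF: handle as an ordinary page fault. */
		do_page_fault(regs, error_code, address);
		break;
	case KVM_PV_REASON_PAGE_NOT_PRESENT:
		/* Page is swapped out by the host: sleep the task. */
		kvm_async_pf_task_wait((u32)address, !user_mode(regs));
		break;
	case KVM_PV_REASON_PAGE_READY:
		/* Matching 'page ready': wake the waiting task. */
		kvm_async_pf_task_wake((u32)address);
		break;
	}
}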

>
> So either we need a way to report errors back while doing synchronous
> page faults, or we can't fall back to synchronous page faults while
> async page faults are enabled.
>
> While we are reworking the async page fault mechanism, I want to make
> sure that the error reporting part has been taken care of as part of
> the design. I don't want to be dealing with it after the fact.

The main issue I'm seeing here is that we'll need to deliver these
errors 'right now' and not some time later. Generally, exceptions
(e.g. #VE) should work, but there are some corner cases; I remember
Paolo and Andy discussing these (just hoping they'll jump in with their
conclusions :-). If we somehow manage to exclude interrupts-disabled
context from our scope we should be good; I don't see a reason to skip
delivering #VE there.

For the part this series touches, "page ready" notifications, we don't
skip them, but at the same time there is no timely delivery guarantee;
we just queue an interrupt. I'm not sure you'll need these for
virtio-fs, though.

Thanks for the feedback!

-- 
Vitaly
