Date:   Tue, 25 Oct 2022 23:07:10 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Christian Borntraeger <borntraeger@...ux.ibm.com>,
        Emanuele Giuseppe Esposito <eesposit@...hat.com>,
        kvm@...r.kernel.org, Jonathan Corbet <corbet@....net>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a
 vm

On Tue, Oct 25, 2022, Paolo Bonzini wrote:
> On 10/25/22 17:55, Sean Christopherson wrote:
> > On Tue, Oct 25, 2022, Paolo Bonzini wrote:
> >    - to prevent _all_ ioctls() because it's not just KVM_RUN that consumes memslots
> 
> This is perhaps an occasion to solve another disagreement: I still think
> that accessing memory outside KVM_RUN (for example KVM_SET_NESTED_STATE
> loading the APICv pages from VMCS12) is a bug, on the other hand we
> disagreed on that and you wanted to kill KVM_REQ_GET_NESTED_STATE_PAGES.

I don't think it's realistic to make accesses outside of KVM_RUN go away, e.g.
see the ARM ITS discussion in the dirty ring thread.  kvm_xen_set_evtchn() also
explicitly depends on writing guest memory without going through KVM_RUN (and
apparently can be invoked from a kernel thread?!?).

In theory, I do actually like the idea of restricting memory access to KVM_RUN,
but in reality I just think that forcing everything into KVM_RUN creates far more
problems than it solves.  E.g. my complaint with KVM_REQ_GET_NESTED_STATE_PAGES
is that instead of synchronously telling userspace it has a problem, KVM chugs
along as if everything is fine and only fails at a later point in time.  I doubt
userspace would actually do anything differently, i.e. the VM is likely hosed no
matter what, but deferring work adds complexity in KVM and makes it more difficult
to debug problems when they occur.
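
For reference, a simplified sketch of the deferral in question (not verbatim
kernel code, and eliding the nested_ops plumbing): KVM_SET_NESTED_STATE only
*requests* the VMCS12 pages, and the request isn't serviced until the next
KVM_RUN, which is where a bad GPA finally surfaces:

  /* KVM_SET_NESTED_STATE path: don't map the pages now, just ask. */
  kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

  /* Much later, on the next KVM_RUN: the failure pops out here, far
   * away from the ioctl that actually supplied the bad state. */
  if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
          if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu)))
                  return 0;
  }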

> >    - to stop anything else in the system that consumes KVM memslots, e.g. KVM GT
> 
> Is this true if you only look at the KVM_GET_DIRTY_LOG case and consider it
> a guest bug to access the memory (i.e. ignore the strange read-only changes
> which only happen at boot, and which I agree are QEMU-specific)?

Yes?  I don't know exactly what "the KVM_GET_DIRTY_LOG case" is. 
 
> >    - to signal vCPU tasks so that the system doesn't livelock if a vCPU is stuck
> >      outside of KVM, e.g. in get_user_pages_unlocked() (Peter Xu's series)
> 
> This is the more important one but why would it livelock?

Livelock may not be the right word.  Peter's series is addressing a scenario where
a vCPU gets stuck faulting in a page because the page never arrives over the
network.  The solution is to recognize non-fatal signals while trying to fault in
the page.
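
Roughly, and glossing over the actual plumbing (the flag and call site below
are approximate, not a quote of the series): the fault path learns to bail on
a pending non-fatal signal instead of sleeping forever:

  npages = get_user_pages_unlocked(addr, 1, &page,
                                   flags | FOLL_INTERRUPTIBLE);
  if (npages == -EINTR)
          return -EINTR;  /* propagate up and exit to userspace */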

KVM_KICK_ALL_RUNNING_VCPUS doesn't handle that case because it's obviously not
realistic to check for pending KVM requests while buried deep in mm/ code.  I.e.
userspace also needs to send SIGUSR1 or whatever to ensure all vCPUs get kicked
out of non-KVM code.

That's not the end of the world, and they probably end up being orthogonal things
in userspace code, but it yields a weird API because KVM_KICK_ALL_RUNNING_VCPUS
ends up with the caveat of "oh, by the way, userspace also needs to signal all
vCPU tasks too, otherwise KVM_KICK_ALL_RUNNING_VCPUS might hang".
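
E.g., a minimal sketch of that userspace half, assuming the VMM already
tracks its vCPU threads and kvm_run mappings (vcpu_threads[] and runs[] are
made-up bookkeeping; immediate_exit plus a signal is the standard kick
mechanism):

  for (int i = 0; i < nr_vcpus; i++) {
          /* Force the current/next KVM_RUN to return to userspace... */
          runs[i]->immediate_exit = 1;
          /* ...and interrupt the thread if it's blocked elsewhere in
           * the kernel, e.g. in an interruptible page fault.  The
           * SIGUSR1 handler can be a no-op; delivery is what matters. */
          pthread_kill(vcpu_threads[i], SIGUSR1);
  }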

> > And because of the nature of KVM, to support this API on all architectures, KVM
> > needs to make change on all architectures, whereas userspace should be able to
> > implement a generic solution.
> 
> Yes, I agree that this is essentially just a more efficient kill().
> Emanuele, perhaps you can put together a patch to x86/vmexit.c in
> kvm-unit-tests, where CPU0 keeps changing memslots and the other CPUs are in
> a for(;;) busy wait, to measure the various ways to do it?

I'm a bit confused.  Is the goal of this to simplify QEMU, dedup VMM code, provide
a more performant solution, something else entirely?  I.e. why measure the
performance of x86/vmexit.c?  I have a hard time believing the overhead of pausing
vCPUs is going to be the long pole when it comes to memslot changes.  I assume
rebuilding KVM's page tables because of the "zap all" behavior would
completely dwarf any overhead from pausing vCPUs.
