Date:   Mon, 24 Oct 2022 22:45:16 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Christian Borntraeger <borntraeger@...ux.ibm.com>
Cc:     Emanuele Giuseppe Esposito <eesposit@...hat.com>,
        kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a
 vm

On Mon, Oct 24, 2022, Christian Borntraeger wrote:
> Am 24.10.22 um 10:33 schrieb Emanuele Giuseppe Esposito:
> > Am 24/10/2022 um 09:56 schrieb Christian Borntraeger:
> > > > Therefore the simplest solution is to pause all vcpus in the kvm
> > > > side, so that:

Simplest for QEMU maybe, most definitely not simplest for KVM.

> > > > - userspace just needs to call the new API before making memslots
> > > > changes, keeping modifications to the minimum
> > > > - dirty page updates are also performed when vcpus are blocked, so
> > > > there is no time window between the dirty page ioctl and memslots
> > > > modifications, since vcpus are all stopped.
> > > > - no need to modify the existing memslots API
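
(For concreteness, the flow being proposed would look roughly like the sketch
below.  The pause/resume ioctl names and numbers are placeholders invented
purely for illustration; they are not the names used by this series or by the
current KVM UAPI.  KVM_GET_DIRTY_LOG and KVM_SET_USER_MEMORY_REGION are the
existing ioctls.)

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Placeholder definitions for illustration only; these ioctls and their
 * numbers are NOT part of the real KVM UAPI. */
#define KVM_PAUSE_ALL_VCPUS	_IO(KVMIO, 0xf0)
#define KVM_RESUME_ALL_VCPUS	_IO(KVMIO, 0xf1)

static int update_memslot_while_paused(int vm_fd,
				       struct kvm_userspace_memory_region *mr,
				       struct kvm_dirty_log *log)
{
	int ret;

	/* Block every vCPU inside KVM before touching the memslot layout. */
	ret = ioctl(vm_fd, KVM_PAUSE_ALL_VCPUS, 0);
	if (ret < 0)
		return ret;

	/* The dirty log is harvested while vCPUs are quiesced, so there is
	 * no window between the dirty-log read and the memslot change. */
	ret = ioctl(vm_fd, KVM_GET_DIRTY_LOG, log);
	if (!ret)
		ret = ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, mr);

	ioctl(vm_fd, KVM_RESUME_ALL_VCPUS, 0);
	return ret;
}
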
> > > Isn't QEMU able to achieve the same goal today by forcing all vCPUs
> > > into userspace with a signal? Can you provide some rationale for why
> > > this is better in the cover letter or patch description?
> > > 
> > David Hildenbrand tried to propose something similar here:
> > https://github.com/davidhildenbrand/qemu/commit/86b1bf546a8d00908e33f7362b0b61e2be8dbb7a
> > 
> > While it is not optimized, I think it's more complex than the current
> > series, since QEMU would also have to make sure all running ioctls finish
> > and prevent new ones from being executed.
> > 
> > Also, we can't use pause_all_vcpus()/resume_all_vcpus() because they drop
> > the BQL (the Big QEMU Lock).
> > 
> > Would that be ok as rationale?
> 
> Yes, that helps and should be part of the cover letter for the next iterations.

But that doesn't explain why KVM needs to get involved; it only explains why QEMU
can't use its existing pause_all_vcpus().  I do not understand why this is a
problem QEMU needs KVM's help to solve.
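
For illustration only, the userspace-only alternative would amount to
something like the sketch below: a generic gate around vcpu ioctls, not
QEMU's actual code, with all names invented for this example.  vCPU threads
take the gate for reading around each vcpu ioctl; the thread changing
memslots takes it for writing, which waits for in-flight ioctls to finish
and blocks new ones, assuming the vCPUs have already been kicked out of
KVM_RUN (e.g. with a signal).

#include <pthread.h>
#include <sys/ioctl.h>

static pthread_rwlock_t vcpu_ioctl_gate = PTHREAD_RWLOCK_INITIALIZER;

/* Wrapper used by vCPU threads for every vcpu ioctl. */
static long gated_vcpu_ioctl(int vcpu_fd, unsigned long request, void *arg)
{
	long ret;

	pthread_rwlock_rdlock(&vcpu_ioctl_gate);
	ret = ioctl(vcpu_fd, request, arg);
	pthread_rwlock_unlock(&vcpu_ioctl_gate);
	return ret;
}

/* Called by the thread performing the memslot update, after kicking the
 * vCPUs out to userspace; returns once no vcpu ioctl is in flight. */
static void quiesce_vcpu_ioctls(void)
{
	pthread_rwlock_wrlock(&vcpu_ioctl_gate);
}

static void resume_vcpu_ioctls(void)
{
	pthread_rwlock_unlock(&vcpu_ioctl_gate);
}
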
