Date:   Wed, 26 Oct 2022 19:52:10 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     Christian Borntraeger <borntraeger@...ux.ibm.com>,
        Emanuele Giuseppe Esposito <eesposit@...hat.com>,
        kvm@...r.kernel.org, Jonathan Corbet <corbet@....net>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Hyper-V VTLs, permission bitmaps and userspace exits (was Re: [PATCH
 0/4] KVM: API to block and resume all running vcpus in a vm)

On 10/26/22 01:07, Sean Christopherson wrote:
> I don't think it's realistic to make accesses outside of KVM_RUN go away, e.g.
> see the ARM ITS discussion in the dirty ring thread.  kvm_xen_set_evtchn() also
> explicitly depends on writing guest memory without going through KVM_RUN (and
> apparently can be invoked from a kernel thread?!?).

Yeah, those are the pages that must be considered dirty when using the 
dirty ring.

> In theory, I do actually like the idea of restricting memory access to KVM_RUN,
> but in reality I just think that forcing everything into KVM_RUN creates far more
> problems than it solves.  E.g. my complaint with KVM_REQ_GET_NESTED_STATE_PAGES
> is that instead of synchronously telling userspace it has a problem, KVM chugs
> along as if everything is fine and only fails at a later point in time.  I doubt
> userspace would actually do anything differently, i.e. the VM is likely hosed no
> matter what, but deferring work adds complexity in KVM and makes it more difficult
> to debug problems when they occur.
>
>>>     - to stop anything else in the system that consumes KVM memslots, e.g. KVM GT
>>
>> Is this true if you only look at the KVM_GET_DIRTY_LOG case and consider it
>> a guest bug to access the memory (i.e. ignore the strange read-only changes
>> which only happen at boot, and which I agree are QEMU-specific)?
> 
> Yes?  I don't know exactly what "the KVM_GET_DIRTY_LOG case" is.

It is not possible to atomically read the dirty bitmap and delete a 
memslot: when you delete a memslot, its dirty bitmap is gone with it.  
In this case, however, memory accesses to the deleted memslot are a 
guest bug, so stopping KVM-GT would not be necessary.
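
To make the window concrete, here is a rough userspace sketch (not 
QEMU code; the slot/gpa/size parameters, the 4 KiB page size and the 
missing error handling are just for illustration):

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static void read_log_then_delete(int vm_fd, __u32 slot, __u64 gpa, __u64 size)
{
        /* One bit per 4 KiB page, rounded up to 64-bit longs. */
        unsigned long *bitmap = calloc((size / 4096 + 63) / 64,
                                       sizeof(unsigned long));
        struct kvm_dirty_log log = {
                .slot = slot,
                .dirty_bitmap = bitmap,
        };
        struct kvm_userspace_memory_region del = {
                .slot = slot,
                .guest_phys_addr = gpa,
                .memory_size = 0,       /* size 0 deletes the memslot */
        };

        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
        /*
         * Window: a vCPU or an in-kernel user (ITS, Xen, KVM-GT) can
         * still dirty pages here, and those bits are lost when the
         * slot and its bitmap go away below.
         */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &del);
        free(bitmap);
}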

So while I'm slowly being convinced that QEMU should find a way to pause 
its vCPUs around memslot changes, I'm not sure that pausing everything 
is needed in general.

>>> And because of the nature of KVM, to support this API on all architectures, KVM
>>> needs to make change on all architectures, whereas userspace should be able to
>>> implement a generic solution.
>>
>> Yes, I agree that this is essentially just a more efficient kill().
>> Emanuele, perhaps you can put together a patch to x86/vmexit.c in
>> kvm-unit-tests, where CPU0 keeps changing memslots and the other CPUs are in
>> a for(;;) busy wait, to measure the various ways to do it?
> 
> I'm a bit confused.  Is the goal of this to simplify QEMU, dedup VMM code, provide
> a more performant solution, something else entirely?

Well, a bit of all of them, and perhaps that's the problem.  And while 
the issues at hand *are* self-inflicted wounds on the part of QEMU, it 
seems to me that the underlying issues are general.

For example, Alex Graf and I looked back at your proposal of a userspace 
exit for "bad" accesses to memory, wondering if it could help with 
Hyper-V VTLs too.  To recap, the "higher privileged" code at VTL1 can 
set up VM-wide restrictions on access to some pages through a hypercall 
(HvModifyVtlProtectionMask).  After the hypercall, VTL0 would not be 
able to access those pages.  The hypercall would be handled in userspace 
and would invoke a KVM_SET_MEMORY_REGION_PERM ioctl to restrict the RWX 
permissions, and this ioctl would set up a VM-wide permission bitmap 
that would be used when building page tables.
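
To make that concrete, the uAPI could look roughly like the sketch 
below.  None of this exists in KVM today; the struct layout, the flag 
names and the exit payload are all invented just to illustrate the 
idea:

/* Argument to the (hypothetical) KVM_SET_MEMORY_REGION_PERM ioctl. */
struct kvm_memory_region_perm {
        __u64 guest_phys_addr;  /* start of the range, page aligned */
        __u64 size;             /* length in bytes, page aligned */
        __u32 perm;             /* mask of KVM_REGION_PERM_{R,W,X} */
        __u32 pad;
};

#define KVM_REGION_PERM_R       (1u << 0)
#define KVM_REGION_PERM_W       (1u << 1)
#define KVM_REGION_PERM_X       (1u << 2)

/*
 * A violating access inside KVM_RUN would then return to userspace
 * with a dedicated exit reason, roughly mirroring KVM_EXIT_MMIO:
 */
struct kvm_run_perm_fault {
        __u64 gpa;              /* faulting guest physical address */
        __u32 access;           /* which of R/W/X was attempted */
        __u32 pad;
};

Presumably the ioctl would also have to zap any existing mappings for 
the range, so that the new permissions take effect immediately rather 
than only on the next page table build.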

Using such a bitmap instead of memslots makes it possible to cause 
userspace vmexits on VTL mapping violations with efficient data 
structures.  It would also be possible to use this mechanism around 
KVM_GET_DIRTY_LOG, to read the KVM dirty bitmap just before removing a 
memslot.
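
Continuing the made-up sketch above (and reusing the log/del 
structures from the earlier example), the non-atomic sequence could 
then be closed like this:

        struct kvm_memory_region_perm ro = {
                .guest_phys_addr = gpa,
                .size = size,
                .perm = KVM_REGION_PERM_R | KVM_REGION_PERM_X, /* drop W */
        };

        ioctl(vm_fd, KVM_SET_MEMORY_REGION_PERM, &ro);  /* writes now exit */
        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);          /* bitmap is final */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &del); /* safe to delete */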

However, external accesses to the regions (ITS, Xen, KVM-GT, non-KVM_RUN 
ioctls) would not be blocked, due to the lack of a way to report the 
exit.  The intersection of these features with VTLs should be very small 
(sometimes zero, since VTLs are x86-only), but the ioctls would be a 
problem, so I'm wondering what your thoughts are on this.

Also, while the exit API could be the same, it is not clear to me that 
the permission bitmap would be a good match for entirely "void" memslots 
used to work around non-atomic memslot changes.  So for now let's leave 
this aside and only consider the KVM_GET_DIRTY_LOG case.

Paolo
