Message-ID: <Z4WN3_wUZ1H_e7ou@google.com>
Date: Mon, 13 Jan 2025 14:04:15 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Marc Zyngier <maz@...nel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Oliver Upton <oliver.upton@...ux.dev>,
Michael Ellerman <mpe@...erman.id.au>, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] KVM: Add a common kvm_run flag to communicate an exit
needs completion

On Mon, Jan 13, 2025, Marc Zyngier wrote:
> On Mon, 13 Jan 2025 18:58:45 +0000,
> Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Mon, Jan 13, 2025, Marc Zyngier wrote:
> > > On Mon, 13 Jan 2025 15:44:28 +0000,
> > > Sean Christopherson <seanjc@...gle.com> wrote:
> > > >
> > > > On Sat, Jan 11, 2025, Marc Zyngier wrote:
> > > > > Yet, you don't amend arm64 to publish that flag. Not that I think this
> > > > > causes any issue (even if you save the state at that point without
> > > > > reentering the guest, it will still be consistent), but that
> > > > > directly contradicts the documentation (isn't that ironic? ;-).
> > > >
> > > > It does cause issues; I missed this code in kvm_arch_vcpu_ioctl_run():
> > > >
> > > >         if (run->exit_reason == KVM_EXIT_MMIO) {
> > > >                 ret = kvm_handle_mmio_return(vcpu);
> > > >                 if (ret <= 0)
> > > >                         return ret;
> > > >         }
> > >
> > > That's satisfying a load from the guest forwarded to userspace.
> >
> > And MMIO stores, no? I.e. PC needs to be incremented on stores as well.
>
> Yes, *after* the store has completed. If you replay the instruction,
> the same store comes out.
>
> > > If the VMM did a save of the guest at this stage, restored and resumed it,
> > > *nothing* bad would happen, as PC still points to the instruction that got
> > > forwarded. You'll see the same load again.
> >
> > But replaying an MMIO store could cause all kinds of problems, and even MMIO
> > loads could theoretically be problematic, e.g. if there are side effects in the
> > device that trigger on access to a device register.
>
> But that's the VMM's problem. If it has modified its own state and
> doesn't return to the guest to complete the instruction, that's just
> as bad as a load; loads *do* have side effects as well.

Agreed, just wanted to make sure I wasn't completely misunderstanding something
about arm64.

> Overall, the guest state exposed by KVM is always correct, and
> replaying the instruction is not going to change that. It is only if the
> VMM is broken that things turn ugly *for the VMM itself*,
> and I claim that no amount of added flags is going to help that.

On x86 at least, adding KVM_RUN_NEEDS_COMPLETION reduces the chances of human
error. x86 has had bugs in both KVM (patch 1) and userspace (Google's VMM when
handling MSR exits) that would have been avoided if KVM_RUN_NEEDS_COMPLETION existed.
Unless the VMM is doing something decidedly odd, userspace only needs to write
the handling code once (maybe even just once for all architectures). For KVM,
the flag is set based on
whether or not the vCPU has a valid completion callback, i.e. will be correct so
long as the underlying KVM code is correct.
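
FWIW, the userspace side of that can be a single, architecture-neutral check in
the save path, along the lines of the sketch below.  To be clear, this is just
an illustration against the *proposed* uAPI: KVM_RUN_NEEDS_COMPLETION, its home
in kvm_run::flags, and the helper name are assumptions from this series, not
committed ABI.

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * Hypothetical VMM helper: ensure any in-kernel completion work for the
   * previous exit has run before vCPU state is consumed or saved.  The flag
   * name and its placement in kvm_run::flags follow the proposal in this
   * series, not committed uAPI.
   */
  static void vcpu_complete_pending_exit(int vcpu_fd, struct kvm_run *run)
  {
          if (!(run->flags & KVM_RUN_NEEDS_COMPLETION))
                  return;

          /* Bounce through KVM_RUN without letting the guest execute. */
          run->immediate_exit = 1;
          ioctl(vcpu_fd, KVM_RUN, 0);     /* expected to return -1/EINTR */
          run->immediate_exit = 0;
  }

The same few lines would cover MMIO, PIO, MSR exits, etc. on any architecture
that publishes the flag.
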
Contrast that with the current approach, where the KVM developer needs to get
the KVM code correct and remember to update KVM's documentation. Documentation
is especially problematic, because in practice it can't be tested, i.e. it is much
more likely to be missed by the developer and the maintainer. The VMM either
needs to blindly redo KVM_RUN (as selftests do, and apparently as QEMU does), or
the developer adding VMM support needs to be diligent in reading KVM's documentation.
And like KVM documentation, testing that the VMM is implemented to KVM's "spec"
is effectively impossible in practice, because 99.9999% of the time userspace
exits and save/restore will work just fine.
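
For comparison, "blindly redo KVM_RUN" in practice means unconditionally doing
the immediate_exit dance before every save, roughly as below (again just a
sketch, error handling elided; vcpu_fd and run are the same vCPU fd and mmap'd
kvm_run as in the earlier snippet):

  /*
   * Status-quo sketch: always bounce through KVM_RUN with immediate_exit
   * set before saving vCPU state, so that any pending in-kernel completion
   * runs, whether or not anything is actually pending.
   */
  run->immediate_exit = 1;
  ioctl(vcpu_fd, KVM_RUN, 0);
  run->immediate_exit = 0;

  /* Only now is it safe to do KVM_GET_REGS and friends. */
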
I do agree that the VMM is likely going to run into problems sooner or later if
the developers/maintainers don't fundamentally understand the need to redo KVM_RUN,
but I also think there's significant value in reducing the chances for simple
human error to result in broken VMs.