Message-ID: <20211130101958.fcdqthphyhxzvzla@gator.home>
Date: Tue, 30 Nov 2021 11:19:58 +0100
From: Andrew Jones <drjones@...hat.com>
To: Raghavendra Rao Ananta <rananta@...gle.com>
Cc: Marc Zyngier <maz@...nel.org>, James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Peter Shier <pshier@...gle.com>,
Ricardo Koller <ricarkol@...gle.com>,
Oliver Upton <oupton@...gle.com>,
Reiji Watanabe <reijiw@...gle.com>,
Jing Zhang <jingzhangos@...gle.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH v2 04/11] KVM: arm64: Setup a framework for hypercall
bitmap firmware registers
On Mon, Nov 29, 2021 at 04:56:19PM -0800, Raghavendra Rao Ananta wrote:
> On Sat, Nov 27, 2021 at 9:27 AM Andrew Jones <drjones@...hat.com> wrote:
> >
> > On Tue, Nov 23, 2021 at 10:34:23AM -0800, Raghavendra Rao Ananta wrote:
> > > On Mon, Nov 22, 2021 at 9:23 AM Marc Zyngier <maz@...nel.org> wrote:
> > > > I keep being baffled by this. Why should we track the VMM accesses or
> > > > the VMM writeback? This logic doesn't seem to bring anything useful as
> > > > far as I can tell. All we need to ensure is that what is written to
> > > > the pseudo-register is an acceptable subset of the previous value, and
> > > > I cannot see why this can't be done at write-time.
> > > >
> > > > If you want to hide this behind a capability, fine (although my guts
> > > > feeling is that we don't need that either). But I really want to be
> > > > convinced about all this tracking.
> > > >
> > > The tracking of accesses to each register is necessary here to guard
> > > against the possibility that the user-space may not be aware of a
> > > newly introduced register, and hence, hasn't accessed it. If it has at
> > > least read the register, even without writing it back, we assume that
> > > the user-space is happy with the configuration. But the fact that the
> > > register has not even been read indicates that user-space is unaware
> > > of the existence of this new register. In such a case, if we don't
> > > sanitize (clear all the bits of) this register, the features will be
> > > exposed unconditionally to the guest.
> > >
> > > The capability is introduced here to make sure that this new
> > > infrastructure is backward compatible with old VMMs. If the VMMs don't
> > > enable this capability, they are probably unaware of it, and things
> > > will work as they always have: new services are exposed to the guest
> > > unconditionally as and when they are introduced.
> >
> > Hi Raghavendra,
> >
> > I don't think we need a CAP that has to be enabled or to make any
> > assumptions or policy decisions in the kernel. I think we just need to
> > provide a bit more information to the VMM when it checks if KVM has the
> > CAP. If KVM told the VMM how many pseudo registers there are, which
> > can be done with the return value of the CAP, then the VMM code could be
> > something like this
> >
> > r = check_cap(KVM_CAP_ARM_HVC_FW_REG_BMAP);
> > if (r) {
> >         num_regs = r;
> >
> >         for (idx = 0; idx < num_regs; ++idx) {
> >                 reg = hvc_fw_reg(idx);
> >
> >                 if (idx > vmm_last_known_idx) {
> >                         ...
> >                 } else {
> >                         ...
> >                 }
> >         }
> > }
> >
> > With this, the VMM is free to decide whether it wants to clear all
> > registers greater than the last index it was aware of, or whether it
> > wants to let those registers get exposed to the guest without knowing
> > what's being exposed. Along with documentation stating that by default
> > everything gets exposed by KVM, which is the backwards-compatible thing
> > to do, the VMM has been warned and given everything it needs to manage
> > its guests.
> >
> Hi Andrew,
>
> Thanks for your comments and suggestions!
>
> I like the idea of sharing info via a read of the CAP, and not having
> to explicitly sanitize/clear the registers before the guest begins to
> run.
> However, the handshake is done over an API doc, which is a little
> concerning. The user-space must remember to explicitly clear any new
> register that it doesn't want to expose to the guest, while the
> current approach does this automatically.
> Any bug in the VMM's implementation could unintentionally expose
> features to the guest. What do you think?
The VMM can mess things up in many ways. While KVM should protect itself
from the VMM, it shouldn't try to protect the VMM from the VMM itself. In
this case, the risk here isn't that we allow the VMM to do something that
can harm KVM, or even the guest. The risk is only that the VMM fails to do
what it wanted to do (assuming it didn't want to expose unknown features
to the guest). I.e. the risk here is only that the VMM has a bug, and it's
an easily detectable bug. I say let the VMM developers manage it.
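
To make that concrete, the clearing on the VMM side can be pretty small.
A rough sketch, reusing the hypothetical check_cap()/hvc_fw_reg() helpers
and vmm_last_known_idx from my earlier pseudocode (vcpu_fd and the error
handling are made up as well, so don't read this as the actual interface):

    #include <stdint.h>
    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    uint64_t zero = 0;
    struct kvm_one_reg one_reg = { .addr = (uint64_t)&zero };
    int idx, nregs;

    nregs = check_cap(KVM_CAP_ARM_HVC_FW_REG_BMAP);
    for (idx = vmm_last_known_idx + 1; idx < nregs; ++idx) {
            /* Unknown pseudo register: clear it so no new features leak. */
            one_reg.id = hvc_fw_reg(idx);
            if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &one_reg) < 0)
                    err(1, "KVM_SET_ONE_REG");
    }

And if the VMM forgets to do this, the guest simply sees the new services,
which is exactly the documented default, so the bug is easy to spot.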
>
> > Another thing that might be nice is giving userspace control over how
> > many pseudo registers show up in get-reg-list. In order to migrate from
> > a host with a more recent KVM to a host with an older KVM[*], we should
> > only expose the number of pseudo registers that the older host is aware
> > of. The VMM would zero these registers out anyway, in order to be
> > compatible for migration, but that's not enough when they also show up
> > in the list (at least not with QEMU, which aborts migration when the
> > destination expects fewer registers than what get-reg-list provides).
> >
> > [*] This isn't a great idea, but it'd be nice if we could make it work,
> > because users may want to roll back upgrades or, after migrating to a
> > host with a newer kernel, they may want to migrate back to where they
> > started.
> >
> Good point. But IIUC, if the user-space is able to communicate that it
> expects a certain get-reg-list, do you think it could handle this at
> its end too, rather than relying on the kernel to send a reduced list
> back?
Yes, I think we can probably manage this in the VMM, and maybe/probably
that's the better place to manage it.
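
E.g. the VMM could just filter what it got from KVM_GET_REG_LIST before
building its migration stream, instead of asking KVM to hide anything.
Something like the fragment below, where is_hvc_fw_bmap_reg() and
hvc_fw_reg_idx() are invented names for checks against whatever register
encoding the series ends up using:

    uint64_t keep[MAX_REGS];
    int i, nkeep = 0;

    for (i = 0; i < nregs; i++) {
            /* Drop bitmap pseudo registers newer than what we know about. */
            if (is_hvc_fw_bmap_reg(reg_ids[i]) &&
                hvc_fw_reg_idx(reg_ids[i]) > vmm_last_known_idx)
                    continue;
            keep[nkeep++] = reg_ids[i];
    }

(reg_ids[]/nregs being the KVM_GET_REG_LIST output and MAX_REGS a VMM-side
bound.)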
>
> My assumption was that the VMM would statically maintain a known set of
> registers that it wants to work with, modified by hand, rather than
> relying on get-reg-list. This could be the common subset of registers
> present in all the host kernels (higher or lower versions) of the
> migration fleet. This config wouldn't change even when get-reg-list
> declares a new register, since the features it exposes could still be
> untested. Migrating to a host with a missing register shouldn't be
> possible in this setting, but if the VMM encounters that scenario, it
> should be able to refuse the migration (similar to QEMU).
>
> Please correct me if you think it's a false assumption to proceed with.
Your assumptions align with mine. It seems that, as we move towards CPU
models, get-reg-list's role will likely only be to confirm that a host
supports the minimum required. We should probably implement/change the VMM
to allow migrating from a host with more registers to one with fewer, as
long as the one with fewer includes all the required registers. Of course
we also need to ensure that any registers we don't want to require are
not exposed to the guest, but I guess that's precisely what we're trying
to do with this series for at least some pseudo registers.
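
The check on the destination side then reduces to a simple subset test,
something like this (required[] being the VMM's hand-maintained set and
dst[] the destination's KVM_GET_REG_LIST output; both invented here):

    static bool migration_ok(const uint64_t *required, int nreq,
                             const uint64_t *dst, int ndst)
    {
            int i, j;

            for (i = 0; i < nreq; i++) {
                    for (j = 0; j < ndst; j++)
                            if (dst[j] == required[i])
                                    break;
                    if (j == ndst)
                            return false;   /* a required register is missing */
            }
            return true;    /* extra destination registers are fine */
    }

(Needs <stdint.h>/<stdbool.h>; a linear scan is fine given the list sizes.)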
Thanks,
drew