Message-ID: <7462738f-e837-cd99-f441-8e7c29d250cd@arm.com>
Date: Tue, 7 Feb 2023 17:50:58 +0000
From: James Morse <james.morse@....com>
To: Marc Zyngier <maz@...nel.org>
Cc: linux-pm@...r.kernel.org, loongarch@...ts.linux.dev,
kvmarm@...ts.linux.dev, kvm@...r.kernel.org,
linux-acpi@...r.kernel.org, linux-arch@...r.kernel.org,
linux-ia64@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Lorenzo Pieralisi <lpieralisi@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Sudeep Holla <sudeep.holla@....com>,
Borislav Petkov <bp@...en8.de>, H Peter Anvin <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Huacai Chen <chenhuacai@...nel.org>,
Suzuki K Poulose <suzuki.poulose@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Len Brown <lenb@...nel.org>,
Rafael Wysocki <rafael@...nel.org>,
WANG Xuerui <kernel@...0n.name>,
Salil Mehta <salil.mehta@...wei.com>,
Russell King <linux@...linux.org.uk>,
Jean-Philippe Brucker <jean-philippe@...aro.org>
Subject: Re: [RFC PATCH 29/32] KVM: arm64: Pass hypercalls to userspace
Hi Marc,
On 05/02/2023 10:12, Marc Zyngier wrote:
> On Fri, 03 Feb 2023 13:50:40 +0000,
> James Morse <james.morse@....com> wrote:
>>
>> From: Jean-Philippe Brucker <jean-philippe@...aro.org>
>>
>> When capability KVM_CAP_ARM_HVC_TO_USER is available, userspace can
>> request to handle all hypercalls that aren't handled by KVM. With the
>> help of another capability, this will allow userspace to handle PSCI
>> calls.
> On top of Oliver's ask not to make this a blanket "steal everything",
> but instead to have an actual request for ranges of forwarded
> hypercalls:
>
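(Agreed on requesting ranges. From the VMM side I'd expect that to look something
like the sketch below. KVM_CAP_ARM_HVC_TO_USER is this series' capability; which
fd the cap is enabled on and the [first, last] encoding in args[] are guesses at
the interface, not something that exists yet:)

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /*
     * Ask KVM to forward a range of SMCCC function IDs to userspace.
     * Encoding the range in kvm_enable_cap.args[] is hypothetical.
     */
    static void request_hvc_range(int vm_fd, __u64 first, __u64 last)
    {
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_ARM_HVC_TO_USER,
                    .args = { first, last },
            };

            if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
                    err(1, "KVM_ENABLE_CAP");
    }

    /* e.g. the standard PSCI SMC64 range: */
    /* request_hvc_range(vm_fd, 0xc4000000, 0xc400001f); */
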
>> Notes on this implementation:
>>
>> * A similar mechanism was proposed for SDEI some time ago [1]. This RFC
>> generalizes the idea to all hypercalls, since that was suggested on
>> the list [2, 3].
>>
>> * We're reusing kvm_run.hypercall. I copied x0-x5 into
>> kvm_run.hypercall.args[] to help userspace but I'm tempted to remove
>> this, because:
>> - Most user handlers will need to write results back into the
>> registers (x0-x3 for SMCCC), so if we keep this shortcut we should
>> go all the way and read them back on return to kernel.
>> - QEMU doesn't care about this shortcut, it pulls all vcpu regs before
>> handling the call.
>> - SMCCC uses x0-x16 for parameters.
>> x0 does contain the SMCCC function ID and may be useful for fast
>> dispatch, we could keep that plus the immediate number.
>>
>> * Add a flag in the kvm_run.hypercall telling whether this is HVC or
>> SMC? Can be added later in those bottom longmode and pad fields.
> We definitely need this. A nested hypervisor can (and does) use SMCs
> as the conduit.
Christoffer's comment last time round was that with this, EL2 guests get SMC and
EL1 guests get HVC. The VMM could never get both...
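(For reference, the userspace end of one of these exits might look like the sketch
below. KVM_EXIT_HYPERCALL, kvm_run.hypercall and KVM_SET_ONE_REG are existing UAPI;
whether the exit also carries an SMC/HVC flag is exactly what's being discussed, so
none is shown:)

    #include <stddef.h>
    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <linux/psci.h>

    /* Write a result back into the guest's x0 with KVM_SET_ONE_REG. */
    static void vcpu_set_x0(int vcpu_fd, __u64 val)
    {
            struct kvm_one_reg reg = {
                    .id = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE |
                          KVM_REG_ARM_CORE_REG(regs.regs[0]),
                    .addr = (__u64)&val,
            };

            if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
                    err(1, "KVM_SET_ONE_REG");
    }

    /* Called when kvm_run.exit_reason == KVM_EXIT_HYPERCALL. */
    static void handle_hypercall(int vcpu_fd, struct kvm_run *run)
    {
            /* x0 is the SMCCC function ID; this series also copies x0-x5
             * into hypercall.args[]. */
            switch (run->hypercall.nr) {
            case PSCI_0_2_FN_PSCI_VERSION:
                    vcpu_set_x0(vcpu_fd, PSCI_VERSION(1, 1));
                    break;
            default:
                    vcpu_set_x0(vcpu_fd, (__u64)-1);  /* SMCCC NOT_SUPPORTED */
            }
    }
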
> The question is whether they represent two distinct
> namespaces or not. I *think* we can unify them, but someone should
> check and maybe get clarification from the owners of the SMCCC spec.
i.e. the VMM requests 0xC400_0000:0xC400_001F regardless of SMC/HVC?
I don't yet see how a VMM could get HVC out of a virtual-EL2 guest...
>> * On top of this we could share with userspace which HVC ranges are
>> available and which ones are handled by KVM. That can actually be added
>> independently, through a vCPU/VM device attribute which doesn't consume
>> a new ioctl:
>> - userspace issues HAS_ATTR ioctl on the vcpu fd to query whether this
>> feature is available.
>> - userspace queries the number N of HVC ranges using one GET_ATTR.
>> - userspace passes an array of N ranges using another GET_ATTR. The
>> array is filled and returned by KVM.
> As mentioned above, I think this interface should go both ways.
> Userspace should request the forwarding of a certain range of
> hypercalls via a similar SET_ATTR interface.
Yup, I'll sync up with Oliver about that.
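Something like the below, presumably (a sketch: KVM_{HAS,GET}_DEVICE_ATTR on a vcpu
fd are existing UAPI, but the group/attr names and the range layout are invented
for illustration):

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    struct hvc_range { __u64 first, last; };  /* hypothetical layout */

    static void probe_hvc_ranges(int vcpu_fd)
    {
            __u64 nr = 0;
            struct kvm_device_attr attr = {
                    .group = KVM_ARM_VCPU_HVC_RANGES,     /* hypothetical */
                    .attr  = KVM_ARM_VCPU_HVC_NR_RANGES,  /* hypothetical */
                    .addr  = (__u64)&nr,
            };

            /* Does this kernel know about the feature at all? */
            if (ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr))
                    return;

            /* How many ranges does KVM describe? */
            if (ioctl(vcpu_fd, KVM_GET_DEVICE_ATTR, &attr))
                    err(1, "KVM_GET_DEVICE_ATTR");

            /* Fetch the array; per Marc, a SET_DEVICE_ATTR with the same
             * layout would request the forwarding. */
            struct hvc_range ranges[nr];
            attr.attr = KVM_ARM_VCPU_HVC_RANGE_LIST;      /* hypothetical */
            attr.addr = (__u64)ranges;
            if (ioctl(vcpu_fd, KVM_GET_DEVICE_ATTR, &attr))
                    err(1, "KVM_GET_DEVICE_ATTR");
    }
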
> Another question is how we migrate VMs that have these forwarding
> requirements. Do we expect the VMM to replay the forwarding as part of
> the setting up on the other side? Or do we save/restore this via a
> firmware pseudo-register?
Pfff. That's the VMM's problem. Enabling these things means it has its own internal
state to migrate (is this vCPU on or off?); I doubt it needs reminding that the
state exists.
That said, Salil is looking at making this work with migration in QEMU.
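(i.e. on the destination the VMM just replays its own record of what it asked for,
before the first KVM_RUN; reusing the hypothetical helper from the earlier sketch:)

    /* vm->fwd[] is the VMM's saved list of forwarded ranges. */
    for (size_t i = 0; i < vm->nr_forwarded; i++)
            request_hvc_range(vm->fd, vm->fwd[i].first, vm->fwd[i].last);
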
Thanks,
James