Message-ID: <20201127172434.GA984327@google.com>
Date: Fri, 27 Nov 2020 17:24:34 +0000
From: Quentin Perret <qperret@...gle.com>
To: Marc Zyngier <maz@...nel.org>
Cc: Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
kernel-team@...roid.com
Subject: Re: [PATCH v4 03/14] KVM: arm64: Kill 32-bit vCPUs on systems with
mismatched EL0 support
On Friday 27 Nov 2020 at 17:14:11 (+0000), Marc Zyngier wrote:
> On 2020-11-27 11:53, Will Deacon wrote:
> > On Fri, Nov 27, 2020 at 10:26:47AM +0000, Marc Zyngier wrote:
> > > On 2020-11-24 15:50, Will Deacon wrote:
> > > > If a vCPU is caught running 32-bit code on a system with mismatched
> > > > support at EL0, then we should kill it.
> > > >
> > > > Acked-by: Marc Zyngier <maz@...nel.org>
> > > > Signed-off-by: Will Deacon <will@...nel.org>
> > > > ---
> > > > arch/arm64/kvm/arm.c | 11 ++++++++++-
> > > > 1 file changed, 10 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > index 5750ec34960e..d322ac0f4a8e 100644
> > > > --- a/arch/arm64/kvm/arm.c
> > > > +++ b/arch/arm64/kvm/arm.c
> > > > @@ -633,6 +633,15 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
> > > > }
> > > > }
> > > >
> > > > +static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
> > > > +{
> > > > + if (likely(!vcpu_mode_is_32bit(vcpu)))
> > > > + return false;
> > > > +
> > > > + return !system_supports_32bit_el0() ||
> > > > + static_branch_unlikely(&arm64_mismatched_32bit_el0);
> > > > +}
> > > > +
> > > > /**
> > > > * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
> > > > * @vcpu: The VCPU pointer
> > > > @@ -816,7 +825,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> > > > * with the asymmetric AArch32 case), return to userspace with
> > > > * a fatal error.
> > > > */
> > > > - if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
> > > > + if (vcpu_mode_is_bad_32bit(vcpu)) {
> > > > /*
> > > > * As we have caught the guest red-handed, decide that
> > > > * it isn't fit for purpose anymore by making the vcpu
> > >
> > > Given the new definition of system_supports_32bit_el0() in the
> > > previous patch, why do we need this patch at all?
> >
> > I think the check is still needed, as this is an unusual case where we
> > want to reject the mismatched system. For example, imagine
> > 'arm64_mismatched_32bit_el0' is true and we're on a mismatched system:
> > in this case system_supports_32bit_el0() will return 'true' because we
> > allow 32-bit applications to run, we support the 32-bit personality etc.
> >
> > However, we still want to terminate 32-bit vCPUs if we spot them in this
> > situation, so we have to check for:
> >
> > !system_supports_32bit_el0() ||
> > static_branch_unlikely(&arm64_mismatched_32bit_el0)
> >
> > so that we only allow 32-bit vCPUs when all of the physical CPUs support
> > it at EL0.
> >
> > I could make this clearer either by adding a comment, or avoiding
> > system_supports_32bit_el0() entirely here and just checking the
> > sanitised SYS_ID_AA64PFR0_EL1 register directly instead.
> >
> > What do you prefer?
>
> Yeah, the sanitized read feels better, if only because that is
> what we are going to read in all the valid cases, unfortunately.
> read_sanitised_ftr_reg() is sadly not designed to be called on
> a fast path, meaning that 32bit guests will do a bsearch() on
> the ID-regs every time they exit...
>
> I guess we will have to evaluate how much we lose with this.

Could we use the trick we have for arm64_ftr_reg_ctrel0 to speed this
up?
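
(For illustration only, a rough sketch of the kind of thing I have in
mind: expose the ID_AA64PFR0_EL1 ftr_reg as a named global, the way
arm64_ftr_reg_ctrel0 is exposed today, so the run loop can read the
cached sanitised value without the bsearch() in
read_sanitised_ftr_reg(). The name arm64_ftr_reg_id_aa64pfr0 below is
made up, and this assumes the sanitised EL0 field reads as
32-bit-incapable on a mismatched system, as discussed above.)

/* Hypothetical global, declared alongside arm64_ftr_reg_ctrel0 */
extern struct arm64_ftr_reg arm64_ftr_reg_id_aa64pfr0;

static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
{
	if (likely(!vcpu_mode_is_32bit(vcpu)))
		return false;

	/* Read the cached sanitised value directly: no bsearch() */
	return !id_aa64pfr0_32bit_el0(arm64_ftr_reg_id_aa64pfr0.sys_val);
}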
Thanks,
Quentin