Date:   Fri, 02 Dec 2022 09:40:04 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Akihiko Odaki <akihiko.odaki@...nix.com>
Cc:     linux-kernel@...r.kernel.org, kvmarm@...ts.linux.dev,
        kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        Oliver Upton <oliver.upton@...ux.dev>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Alexandru Elisei <alexandru.elisei@....com>,
        James Morse <james.morse@....com>,
        Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        asahi@...ts.linux.dev, Alyssa Rosenzweig <alyssa@...enzweig.io>,
        Sven Peter <sven@...npeter.dev>,
        Hector Martin <marcan@...can.st>
Subject: Re: [PATCH 0/3] KVM: arm64: Handle CCSIDR associativity mismatches

On Fri, 02 Dec 2022 05:17:12 +0000,
Akihiko Odaki <akihiko.odaki@...nix.com> wrote:
> 
> >> On M2 MacBook Air, I have seen no other difference in the standard
> >> ID registers; the CCSIDRs are the exception. Perhaps Apple designed
> >> it this way so that macOS's Hypervisor can freely migrate vCPUs, but
> >> I can't be sure of that without more analysis. It is still enough to
> >> migrate vCPUs running Linux, at least.
> > 
> > I guess that MacOS hides more of the underlying HW than KVM does. And
> > KVM definitely doesn't hide the MIDR_EL1 registers, which *are*
> > different between the two clusters.
> 
> It seems KVM stores the MIDR value of one CPU and reuses it as the
> "invariant" value for ioctls, while it exposes to the vCPU the MIDR
> value that each physical CPU owns.

This only affects the VMM though, and not the guest, which sees the
MIDR of the CPU it runs on. The problem is that, short of pinning
the vcpus, you don't know where they will run. So any value is fair
game.
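
To make "pinning" concrete: a minimal sketch of how a VMM can nail a
vCPU thread to one physical CPU, using nothing but the standard Linux
affinity API (this is an illustration, not crosvm's actual code):

/*
 * Pin the calling thread (the vCPU thread itself) to phys_cpu, so
 * that the MIDR the guest reads stops changing under its feet.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_vcpu_thread(int phys_cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(phys_cpu, &set);

	/* pid 0 means "the calling thread" */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return -1;
	}
	return 0;
}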

> This may be a problem worth fixing. My understanding is that while
> there is no serious application which requires vCPU migration among
> physical clusters,

Hey, I do that all the time with kvmtool! It's just that my guests do
not care about being run on one CPU or another.

> crosvm uses KVM on big.LITTLE processors by pinning each vCPU to a
> physical CPU, and it is a real-world application which needs to be
> supported.
> 
> For an application like crosvm, you would expect the vCPU thread to
> get the MIDR value of the physical CPU the thread is pinned to when
> it calls an ioctl, but in reality it can get that of another,
> arbitrary CPU.

No. It will get the MIDR of the CPU it runs on. Check again. What you
are describing above is solely for userspace.
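
To spell out the two paths, here is a sketch (assuming an aarch64
build and an already-open vCPU fd; ARM64_SYS_REG(3, 0, 0, 0, 0) is
the MIDR_EL1 encoding):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Guest side: reads the MIDR of the physical CPU it is running on. */
static inline uint64_t guest_read_midr(void)
{
	uint64_t midr;

	asm volatile("mrs %0, midr_el1" : "=r"(midr));
	return midr;
}

/* Userspace side: gets the single "invariant" value KVM stored. */
static uint64_t vmm_read_midr(int vcpu_fd)
{
	uint64_t midr = 0;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 0, 0, 0, 0), /* MIDR_EL1 */
		.addr = (uint64_t)(uintptr_t)&midr,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	return midr;
}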

> 
> Fixing this problem will pose two design questions:
> 
> 1. Should it expose a value consistent among clusters?
> 
> For example, we can change the KVM initialization code so that it
> initializes VPIDR with the value stored as "invariant". This would
> help migrate vCPUs among clusters, but if you pin each vCPU thread
> to a distinct physical CPU, you may instead want the vCPU to see the
> MIDR value specific to each physical CPU, and to apply quirks or
> tuning parameters according to that value.

Which is what happens. Not at the cluster level, but at the CPU
level. The architecture doesn't describe what a *cluster* is.

> 2. Should it be invariant or variable?
> 
> Fortunately, making it variable is easy: Arm provides the VPIDR_EL2
> register to specify the value exposed as MPIDR_EL0, so there is no
> trapping cost.

And if you do that, you make it impossible for the guest to mitigate
errata, as most of the errata handling is based on the MIDR values.
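
For the avoidance of doubt, this is the shape of the thing. A
simplified sketch in the spirit of what arch/arm64/kernel/cpu_errata.c
does (not the actual kernel code; the field offsets are architectural):

/*
 * MIDR_EL1 layout: Implementer [31:24], Variant [23:20],
 * Architecture [19:16], PartNum [15:4], Revision [3:0].
 */
#include <stdbool.h>
#include <stdint.h>

#define MIDR_IMPLEMENTER(m)	(((m) >> 24) & 0xff)
#define MIDR_PARTNUM(m)		(((m) >> 4) & 0xfff)
#define MIDR_VAR_REV(m)		((((m) >> 16) & 0xf0) | ((m) & 0xf))

struct erratum_range {
	uint8_t  implementer;
	uint16_t partnum;
	uint8_t  var_rev_min;	/* (Variant << 4) | Revision */
	uint8_t  var_rev_max;
};

static bool midr_needs_workaround(uint32_t midr,
				  const struct erratum_range *r)
{
	return MIDR_IMPLEMENTER(midr) == r->implementer &&
	       MIDR_PARTNUM(midr) == r->partnum &&
	       MIDR_VAR_REV(midr) >= r->var_rev_min &&
	       MIDR_VAR_REV(midr) <= r->var_rev_max;
}

Hand the guest a MIDR that doesn't match the CPU it actually runs on,
and none of these checks fire for the erratum it is hitting.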

> ...or we may just say the value of MPIDR_EL0 (and possibly other

I assume you meant MIDR_EL1 here, as MPIDR_EL1 is something else (and
it has no _EL0 equivalent).

> "invariant" registers) exposed via ioctl are useless and deprecated.

Useless? Not really. They are all meaningful to the guest, and a
change there will cause issues.

CTR_EL0 must, for example, be an invariant. Otherwise, you need to
trap all the CMOs when the {I,D}minLine values restored from
userspace are bigger than the ones the HW has. Even worse is the
case where the DIC/IDC bits are set from userspace while the HW has
them cleared: you cannot mitigate that one, and you'll end up with
memory corruption.
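
Spelled out (the CTR_EL0 field offsets are architectural; the check
itself is only my illustration of the hazard, not KVM code):

/*
 * CTR_EL0: IminLine [3:0] and DminLine [19:16] are log2(words) of
 * the smallest I/D cache line; IDC is bit 28, DIC is bit 29.
 */
#include <stdbool.h>
#include <stdint.h>

#define CTR_IMINLINE(ctr)	(((ctr) >> 0) & 0xf)
#define CTR_DMINLINE(ctr)	(((ctr) >> 16) & 0xf)
#define CTR_IDC(ctr)		(((ctr) >> 28) & 1)
#define CTR_DIC(ctr)		(((ctr) >> 29) & 1)

static bool ctr_restore_is_unsafe(uint64_t hw, uint64_t restored)
{
	/*
	 * A bigger restored minLine makes the guest stride its CMOs
	 * too widely, skipping real cache lines.
	 */
	if (CTR_IMINLINE(restored) > CTR_IMINLINE(hw) ||
	    CTR_DMINLINE(restored) > CTR_DMINLINE(hw))
		return true;

	/*
	 * DIC/IDC set while the HW has them clear: the guest elides
	 * CMOs the HW actually requires. Nothing can fix that up.
	 */
	if ((CTR_IDC(restored) && !CTR_IDC(hw)) ||
	    (CTR_DIC(restored) && !CTR_DIC(hw)))
		return true;

	return false;
}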

I've been toying with the idea of exposing to guests, as a PV
service, the list of MIDR/REVIDR values of the CPUs the guest is
allowed to run on. This would allow the guest to enable all the
mitigations it wants in one go.
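
Something like this, purely hypothetical (the function ID, the ABI,
and the helper are all invented for illustration; nothing like it
exists today):

#include <linux/arm-smccc.h>

#define PV_CPU_IMPL_LIST	0xc6000042	/* invented function ID */

/* hypothetical helper: enable whatever workarounds match this pair */
static void apply_errata_for(unsigned long midr, unsigned long revidr)
{
}

static void enable_mitigations_for_all_cpus(void)
{
	struct arm_smccc_res res;
	unsigned long i;

	for (i = 0; ; i++) {
		/* invented ABI: a1 = index; MIDR/REVIDR come back in a1/a2 */
		arm_smccc_1_1_invoke(PV_CPU_IMPL_LIST, i, &res);
		if ((long)res.a0 < 0)
			break;	/* past the end of the list */
		apply_errata_for(res.a1, res.a2);
	}
}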

Not sure I have time for this at the moment, but that'd be something
to explore.

[...]

> > So let's first build on top of HCR_EL2.TID2, and only then, once
> > we have an idea of the overhead, add support for HCR_EL2.TID4 for
> > the systems that have FEAT_EVT.
> 
> That sounds good, I'll write a new series according to this idea.

Thanks!
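
For reference, the rough shape I'd expect, as a sketch only (HCR_TID4
and the ARM64_HAS_EVT capability are assumed names, since FEAT_EVT
support doesn't exist in the tree yet):

static void vcpu_set_cache_id_traps(struct kvm_vcpu *vcpu)
{
	/*
	 * HCR_EL2.TID2 traps CTR_EL0 *and* the cache ID registers
	 * (CCSIDR_EL1, CLIDR_EL1, CSSELR_EL1); with FEAT_EVT,
	 * HCR_EL2.TID4 traps only the cache ID registers, leaving
	 * the hot CTR_EL0 accesses untrapped.
	 */
	if (cpus_have_final_cap(ARM64_HAS_EVT))	/* assumed cap name */
		vcpu->arch.hcr_el2 |= HCR_TID4;	/* assumed define */
	else
		vcpu->arch.hcr_el2 |= HCR_TID2;
}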

	M.

-- 
Without deviation from the norm, progress is not possible.
