Message-ID: <86frqpvyn2.wl-maz@kernel.org>
Date: Wed, 28 Aug 2024 09:29:21 +0100
From: Marc Zyngier <maz@...nel.org>
To: D Scott Phillips <scott@...amperecomputing.com>
Cc: Alex Bennée <alex.bennee@...aro.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
arnd@...aro.org
Subject: Re: [PATCH 1/3] ampere/arm64: Add a fixup handler for alignment faults in aarch64 code
On Tue, 27 Aug 2024 22:23:16 +0100,
D Scott Phillips <scott@...amperecomputing.com> wrote:
>
> Alex Bennée <alex.bennee@...aro.org> writes:
>
> > From: D Scott Phillips <scott@...amperecomputing.com>
> >
> > A later patch will hand out Device memory in some cases to code
> > which expects a Normal memory type, as an errata workaround.
> > Unaligned accesses to Device memory will fault though, so here we
> > add a fixup handler to emulate faulting accesses, at a performance
> > penalty.
> >
> > Many of the instructions in the Loads and Stores group are supported,
> > but these groups are not handled here:
> >
> > * Advanced SIMD load/store multiple structures
> > * Advanced SIMD load/store multiple structures (post-indexed)
> > * Advanced SIMD load/store single structure
> > * Advanced SIMD load/store single structure (post-indexed)
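
To make the later discussion concrete: the emulation being proposed is
of this general shape. A minimal, hypothetical sketch of a single load
case -- not the actual patch; the decode and the pt_regs usage here are
illustrative assumptions:

  /* Hypothetical sketch: emulate "LDR Xt, [Xn]" (64-bit, unsigned
   * immediate 0) after it alignment-faulted on a Device mapping.
   * Ignores SP/XZR special cases, writeback, sign extension, etc. */
  static int emulate_unaligned_ldr(struct pt_regs *regs, u32 insn)
  {
          int rt = insn & 0x1f;           /* destination register */
          int rn = (insn >> 5) & 0x1f;    /* base register */
          u8 *addr = (u8 *)regs->regs[rn];
          u64 val = 0;
          int i;

          /* Single-byte accesses are always aligned, so they cannot
           * alignment-fault on the Device mapping (little-endian
           * assembly of the result). */
          for (i = 0; i < 8; i++)
                  val |= (u64)addr[i] << (8 * i);

          regs->regs[rt] = val;
          regs->pc += 4;                  /* skip the faulting insn */
          return 0;
  }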
>
> Hi Alex, I'm keeping my version of these patches here:
>
> https://github.com/AmpereComputing/linux-ampere-altra-erratum-pcie-65
>
> It looks like the difference from the version you've harvested is that
> I've since added handling for these load/store types:
>
> Advanced SIMD load/store multiple structures
> Advanced SIMD load/store multiple structures (post-indexed)
> Advanced SIMD load/store single structure
> Advanced SIMD load/store single structure (post-indexed)
>
> I've never sent these patches because, in my opinion, there's too much
> complexity to maintain upstream for this workaround, but now that
> they're here, we can have that conversation.
>
> Finally, I think a better approach overall would have been to do the
> device memory mapping in the stage-2 page table; booting with pKVM
> would then give both the host and guests this workaround. I don't
> think that approach changes the fact that there's too much complexity
> here.
That's a rather bad idea. You are then asking the hypervisor to proxy
the accesses that the guest performs. Two issues with that:
- the hypervisor now needs to keep a mapping of the device in its own
S1, which is pretty fun given that the mapping can change location
if BARs get remapped behind the hypervisor's back.
- you are asking the hypervisor to trust the nature of the access. If
you get an error response from the device because the access cannot
be handled (the size, for example), you are likely to get an SError,
which will bring the whole system down.
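
To be concrete about what proxying involves: a stage-2 proxy only gets
a pre-decoded access in the easy cases. A hedged sketch of the decode
step (the ESR_ELx_* names are the kernel's, from asm/esr.h; the
function itself is hypothetical):

  /* Sketch: decode a stage-2 data abort syndrome before proxying. */
  static int decode_s2_data_abort(u64 esr, int *len, int *rt, bool *write)
  {
          if (!(esr & ESR_ELx_ISV))
                  return -EINVAL; /* no valid syndrome: the hypervisor
                                   * would have to fetch and decode the
                                   * guest instruction itself */

          *len   = 1 << ((esr & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
          *rt    = (esr & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
          *write = esr & ESR_ELx_WNR;

          /* Even with a decode in hand, the proxy must still locate
           * its own S1 mapping of the device (which moves if a BAR is
           * reprogrammed) and trust that the device accepts an access
           * of this size -- or take an SError. */
          return 0;
  }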
You can do that to some extent when you completely control the
programming model of the devices. KVM does that for GICv2, for
example. But in general? No.
Furthermore, adding an instruction emulator to KVM is totally out of
the question. If something has to be emulated, that's for userspace to
take care of. Any notion of performance has gone out of the window
anyway, so that doesn't make much of a difference.
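
For comparison, the userspace path needs nothing new: KVM already hands
the VMM a fully decoded access on an MMIO exit. A minimal sketch of the
standard run loop (stock KVM API; device_read()/device_write() are
hypothetical VMM helpers, and error handling is omitted):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static void vcpu_run_loop(int vcpu_fd, struct kvm_run *run)
  {
          for (;;) {
                  ioctl(vcpu_fd, KVM_RUN, 0);

                  if (run->exit_reason == KVM_EXIT_MMIO) {
                          /* Address, width and direction have already
                           * been decoded by the kernel. */
                          if (run->mmio.is_write)
                                  device_write(run->mmio.phys_addr,
                                               run->mmio.data,
                                               run->mmio.len);
                          else
                                  device_read(run->mmio.phys_addr,
                                              run->mmio.data,
                                              run->mmio.len);
                  }
          }
  }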
Thanks,
M.
--
Without deviation from the norm, progress is not possible.