Message-ID: <20231012123541.GB11824@willie-the-truck>
Date: Thu, 12 Oct 2023 13:35:41 +0100
From: Will Deacon <will@...nel.org>
To: Lorenzo Pieralisi <lpieralisi@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
Jason Gunthorpe <jgg@...dia.com>, ankita@...dia.com,
maz@...nel.org, oliver.upton@...ux.dev, aniketa@...dia.com,
cjia@...dia.com, kwankhede@...dia.com, targupta@...dia.com,
vsethi@...dia.com, acurrid@...dia.com, apopple@...dia.com,
jhubbard@...dia.com, danw@...dia.com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 2/2] KVM: arm64: allow the VM to select DEVICE_* and
NORMAL_NC for IO memory
On Thu, Oct 05, 2023 at 11:56:55AM +0200, Lorenzo Pieralisi wrote:
> On Tue, Sep 26, 2023 at 02:52:13PM +0100, Catalin Marinas wrote:
>
> [...]
>
> > Anyway, the text looks fine to me. Thanks for putting it together
> > Lorenzo.
>
> Thanks !
>
> > One thing not mentioned here is that vfio-pci still maps such memory as
> > Device-nGnRnE in user space, and relaxing this potentially creates an
> > alias. But such an alias is only relevant if both the VMM and the VM try
> > to access the same device, which I doubt is a realistic scenario.
>
> A revised log, FWIW:
Thanks for putting this together, Lorenzo. Just one thing below:
> ---
> Currently, KVM for arm64 maps memory that is considered
> device memory (ie it is not RAM) at stage 2 with DEVICE_nGnRE
> memory attributes; as per the Arm architecture [1], this
> setting overrides any device MMIO mapping present at stage 1,
> resulting in a set-up whereby a guest operating system cannot
> choose the memory attributes of its device MMIO mappings on
> its own: they are always overridden by the KVM stage 2 default.
>
> This set-up prevents guest operating systems from selecting
> device memory attributes on a page-by-page basis independently
> of the KVM stage 2 mappings (refer to [1], "Combining stage 1
> and stage 2 memory type attributes"). This turns out to be a
> problem, because guest operating systems (eg Linux) may want
> to map device MMIO regions with memory types such as NormalNC,
> whose attributes allow better performance (eg the gathering
> attribute, which for some devices can generate larger PCIe
> memory write TLPs) and additional operations (eg unaligned
> transactions).
>
> The DEVICE_nGnRE stage 2 default was chosen in KVM
> for arm64 because it was considered safer (ie it would
> not allow guests to trigger uncontained failures that
> ultimately crash the machine), but this reasoning
> turned out to be flawed.
>
> Failure containment is a property of the platform and
> is independent of the memory type used for MMIO device
> mappings (if anything, the DEVICE_nGnRE memory type is
> more problematic than NormalNC in terms of containment,
> since eg aborts triggered on Device memory loads cannot
> be made synchronous, which makes them harder to contain).
> This means that, regardless of the combined stage 1 +
> stage 2 mappings, a platform is safe if and only if
> device transactions cannot trigger uncontained failures;
> in other words, the default KVM stage 2 device memory
> attributes play no role in making device assignment
> safer for a given platform and can therefore be relaxed.
>
> For all these reasons, relax the KVM stage 2 device
> memory attributes from DEVICE_nGnRE to NormalNC.
The reasoning above suggests to me that this should probably just be
Normal cacheable, as that is what actually allows the guest to control
the attributes. So what is the rationale behind stopping at Normal-NC?
Will