Message-ID: <ZsxbZMxOIP795qPM@arm.com>
Date: Mon, 26 Aug 2024 13:39:32 +0300
From: Catalin Marinas <catalin.marinas@....com>
To: Steven Price <steven.price@....com>
Cc: kvm@...r.kernel.org, kvmarm@...ts.linux.dev,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
James Morse <james.morse@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Joey Gouly <joey.gouly@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Christoffer Dall <christoffer.dall@....com>,
Fuad Tabba <tabba@...gle.com>, linux-coco@...ts.linux.dev,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
Gavin Shan <gshan@...hat.com>,
Shanker Donthineni <sdonthineni@...dia.com>,
Alper Gun <alpergun@...gle.com>
Subject: Re: [PATCH v5 14/19] arm64: Enforce bounce buffers for realm DMA

On Mon, Aug 19, 2024 at 02:19:19PM +0100, Steven Price wrote:
> Within a realm guest it's not possible for a device emulated by the VMM
> to access arbitrary guest memory. So force the use of bounce buffers to
> ensure that the memory the emulated devices are accessing is in memory
> which is explicitly shared with the host.
>
> This adds a call to swiotlb_update_mem_attributes() which calls
> set_memory_decrypted() to ensure the bounce buffer memory is shared with
> the host. For non-realm guests or hosts this is a no-op.
>
> Co-developed-by: Suzuki K Poulose <suzuki.poulose@....com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
> Signed-off-by: Steven Price <steven.price@....com>
> ---
> v3: Simplify mem_init() by using a 'flags' variable.
> ---
> arch/arm64/kernel/rsi.c | 1 +
> arch/arm64/mm/init.c | 10 +++++++++-
> 2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
> index 5c2c977a50fb..69d8d9791c65 100644
> --- a/arch/arm64/kernel/rsi.c
> +++ b/arch/arm64/kernel/rsi.c
> @@ -6,6 +6,7 @@
> #include <linux/jump_label.h>
> #include <linux/memblock.h>
> #include <linux/psci.h>
> +#include <linux/swiotlb.h>
>
> #include <asm/io.h>
> #include <asm/rsi.h>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 9b5ab6818f7f..1d595b63da71 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -41,6 +41,7 @@
> #include <asm/kvm_host.h>
> #include <asm/memory.h>
> #include <asm/numa.h>
> +#include <asm/rsi.h>
> #include <asm/sections.h>
> #include <asm/setup.h>
> #include <linux/sizes.h>
> @@ -369,8 +370,14 @@ void __init bootmem_init(void)
> */
> void __init mem_init(void)
> {
> + unsigned int flags = SWIOTLB_VERBOSE;
> bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
>
> + if (is_realm_world()) {
> + swiotlb = true;
> + flags |= SWIOTLB_FORCE;
> + }
> +
> if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> /*
> * If no bouncing needed for ZONE_DMA, reduce the swiotlb
> @@ -382,7 +389,8 @@ void __init mem_init(void)
> swiotlb = true;
> }
>
> - swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
> + swiotlb_init(swiotlb, flags);
> + swiotlb_update_mem_attributes();
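
Just to spell out what that last call does for anyone following along:
swiotlb_update_mem_attributes() ends up calling set_memory_decrypted()
on the default bounce buffer pool, which in a realm transitions those
pages to the shared (host-accessible) IPA space. Conceptually it
amounts to something like the sketch below -- the helper name is made
up for illustration, it is not the actual swiotlb code:

	/*
	 * Illustrative sketch only, not the kernel implementation: after
	 * swiotlb_init() the pool is still private to the realm;
	 * swiotlb_update_mem_attributes() shares it with the host.
	 */
	static void __init share_bounce_pool(phys_addr_t start, size_t bytes)
	{
		unsigned long vaddr = (unsigned long)phys_to_virt(start);

		/* no-op for non-realm guests or on the host */
		set_memory_decrypted(vaddr, PAGE_ALIGN(bytes) >> PAGE_SHIFT);
	}
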
IIRC Will mentioned on a previous version of this series: what do we do
with the kmalloc() minalign bouncing (or other bouncing)? I think this
would only work if the device is shared.
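
To make the concern concrete: with CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC,
dma-direct may bounce small kmalloc() buffers through the same swiotlb
pool, and in a realm that pool is shared with the host. Roughly (a
sketch from memory, the real logic is in kernel/dma/direct.c and
sketch_map_page() is a made-up name):

	static dma_addr_t sketch_map_page(struct device *dev, phys_addr_t phys,
					  size_t size, enum dma_data_direction dir,
					  unsigned long attrs)
	{
		/* realm: SWIOTLB_FORCE bounces everything through the pool */
		if (is_swiotlb_force_bounce(dev) ||
		    /* small kmalloc() buffers below ARCH_DMA_MINALIGN */
		    dma_kmalloc_needs_bounce(dev, size, dir))
			return swiotlb_map(dev, phys, size, dir, attrs);

		return phys_to_dma(dev, phys);
	}

So anything taking the bounce path ends up in memory the host can see,
which is fine for emulated/shared devices but not for an assigned
device that is supposed to DMA into private memory.
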
I'm more and more inclined to only support shared devices with this
series (no device assignment) and make it a strict dependency on
RMM 1.0. Running it in a different configuration with private devices
will fall apart. With this condition, the patch looks fine:

Reviewed-by: Catalin Marinas <catalin.marinas@....com>