Message-Id: <20220706195027.76026-3-parri.andrea@gmail.com>
Date: Wed, 6 Jul 2022 21:50:27 +0200
From: "Andrea Parri (Microsoft)" <parri.andrea@...il.com>
To: Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
Michael Kelley <mikelley@...rosoft.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Anvin <hpa@...or.com>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
iommu@...ts.linux.dev, linux-hyperv@...r.kernel.org,
x86@...nel.org, "Andrea Parri (Microsoft)" <parri.andrea@...il.com>
Subject: [RFC PATCH 2/2] dma-direct: Fix dma_direct_{alloc,free}() for Hyper-V IVMs

In Hyper-V AMD SEV-SNP Isolated VMs, the virtual address returned by
dma_direct_alloc() must map the allocated buffer above
dma_unencrypted_base, because the memory is shared with the hardware
device and must not be accessed through an encrypted mapping.
Modify dma_direct_alloc() to do the necessary remapping.  In
dma_direct_free(), use the (unmodified) DMA address to derive the
original virtual address and re-encrypt the pages.
Suggested-by: Michael Kelley <mikelley@...rosoft.com>
Co-developed-by: Dexuan Cui <decui@...rosoft.com>
Signed-off-by: Dexuan Cui <decui@...rosoft.com>
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@...il.com>
---
kernel/dma/direct.c | 30 +++++++++++++++++++++++++++++-
1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 06b2b901e37a3..c4ce277687a49 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -13,6 +13,7 @@
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include <linux/slab.h>
+#include <linux/io.h> /* for memremap() */
#include "direct.h"
/*
@@ -305,6 +306,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
ret = page_address(page);
if (dma_set_decrypted(dev, ret, size))
goto out_free_pages;
+#ifdef CONFIG_HAS_IOMEM
+ /*
+ * Remap the pages in the unencrypted physical address space
+ * when dma_unencrypted_base is set (e.g., for Hyper-V AMD
+ * SEV-SNP isolated guests).
+ */
+ if (dma_unencrypted_base) {
+ phys_addr_t ret_pa = virt_to_phys(ret);
+
+ ret_pa += dma_unencrypted_base;
+ ret = memremap(ret_pa, size, MEMREMAP_WB);
+ if (!ret)
+ goto out_encrypt_pages;
+ }
+#endif
}
memset(ret, 0, size);
@@ -360,11 +376,23 @@ void dma_direct_free(struct device *dev, size_t size,
dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
return;
- if (is_vmalloc_addr(cpu_addr)) {
+ /*
+ * If dma_unencrypted_base is set, the virtual address returned by
+ * dma_direct_alloc() is in the vmalloc address range.
+ */
+ if (!dma_unencrypted_base && is_vmalloc_addr(cpu_addr)) {
vunmap(cpu_addr);
} else {
if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
arch_dma_clear_uncached(cpu_addr, size);
+#ifdef CONFIG_HAS_IOMEM
+ if (dma_unencrypted_base) {
+ memunmap(cpu_addr);
+ /* re-encrypt the pages using the original address */
+ cpu_addr = page_address(pfn_to_page(PHYS_PFN(
+ dma_to_phys(dev, dma_addr))));
+ }
+#endif
if (dma_set_encrypted(dev, cpu_addr, size))
return;
}
--
2.25.1