Message-Id: <20231017202505.340906-7-rick.p.edgecombe@intel.com>
Date: Tue, 17 Oct 2023 13:25:01 -0700
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, kirill.shutemov@...ux.intel.com,
elena.reshetova@...el.com, isaku.yamahata@...el.com,
seanjc@...gle.com, Michael Kelley <mikelley@...rosoft.com>,
thomas.lendacky@....com, decui@...rosoft.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org
Cc: rick.p.edgecombe@...el.com, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>, iommu@...ts.linux.dev
Subject: [PATCH 06/10] dma: Use free_decrypted_pages()
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.
The DMA code could free decrypted (shared) pages if set_memory_decrypted()
fails. Use the recently added free_decrypted_pages() to avoid this.
Several code paths also free properly encrypted pages through the same
freeing function; rely on free_decrypted_pages() to handle those cases
without leaking the memory.
Cc: Christoph Hellwig <hch@....de>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: iommu@...ts.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
include/linux/dma-map-ops.h | 3 ++-
kernel/dma/contiguous.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f2fc203fb8a1..b0800cbbc357 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -9,6 +9,7 @@
#include <linux/dma-mapping.h>
#include <linux/pgtable.h>
#include <linux/slab.h>
+#include <linux/set_memory.h>
struct cma;
@@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
static inline void dma_free_contiguous(struct device *dev, struct page *page,
size_t size)
{
- __free_pages(page, get_order(size));
+ free_decrypted_pages((unsigned long)page_address(page), get_order(size));
}
#endif /* CONFIG_DMA_CMA*/
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index f005c66f378c..e962f1f6434e 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -429,7 +429,7 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
}
/* not in any cma, free from buddy */
- __free_pages(page, get_order(size));
+ free_decrypted_pages((unsigned long)page_address(page), get_order(size));
}
/*
--
2.34.1