Message-Id: <20231017202505.340906-5-rick.p.edgecombe@intel.com>
Date: Tue, 17 Oct 2023 13:24:59 -0700
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, kirill.shutemov@...ux.intel.com,
elena.reshetova@...el.com, isaku.yamahata@...el.com,
seanjc@...gle.com, Michael Kelley <mikelley@...rosoft.com>,
thomas.lendacky@....com, decui@...rosoft.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org
Cc: rick.p.edgecombe@...el.com, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>, iommu@...ts.linux.dev
Subject: [PATCH 04/10] swiotlb: Use free_decrypted_pages()
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.
Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.
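For reference, a rough sketch of what free_decrypted_pages() (added earlier in
this series) is expected to do, assuming it is built on set_memory_encrypted()
and free_pages() and leaks the pages rather than freeing them if re-encryption
fails; the actual helper may differ in detail:

	/*
	 * Sketch only: re-encrypt the pages and only hand them back to the
	 * page allocator on success, otherwise leak them so shared memory
	 * never ends up on the free lists.
	 */
	static inline void free_decrypted_pages(unsigned long addr, int order)
	{
		int ret = set_memory_encrypted(addr, 1 << order);

		if (ret) {
			pr_warn_ratelimited("Failed to re-encrypt memory before freeing, leaking pages\n");
			return;
		}

		free_pages(addr, order);
	}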
In swiotlb_exit(), check for set_memory_encrypted() errors manually,
because the pages are not necessarily going to the page allocator.
Cc: Christoph Hellwig <hch@....de>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: iommu@...ts.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
kernel/dma/swiotlb.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 394494a6b1f3..ad06786c4f98 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
unsigned long tbl_vaddr;
size_t tbl_size, slots_size;
unsigned int area_order;
+ int ret;
if (swiotlb_force_bounce)
return;
@@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
tbl_size = PAGE_ALIGN(mem->end - mem->start);
slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
- set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+ ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
if (mem->late_alloc) {
area_order = get_order(array_size(sizeof(*mem->areas),
mem->nareas));
free_pages((unsigned long)mem->areas, area_order);
- free_pages(tbl_vaddr, get_order(tbl_size));
+ if (!ret)
+ free_pages(tbl_vaddr, get_order(tbl_size));
free_pages((unsigned long)mem->slots, get_order(slots_size));
} else {
memblock_free_late(__pa(mem->areas),
array_size(sizeof(*mem->areas), mem->nareas));
- memblock_free_late(mem->start, tbl_size);
+ if (!ret)
+ memblock_free_late(mem->start, tbl_size);
memblock_free_late(__pa(mem->slots), slots_size);
}
@@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
return page;
error:
- __free_pages(page, order);
+ free_decrypted_pages((unsigned long)vaddr, order);
return NULL;
}
--
2.34.1