Message-Id: <20240222001721.2269203-1-rick.p.edgecombe@intel.com>
Date: Wed, 21 Feb 2024 16:17:21 -0800
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: hch@....de,
m.szyprowski@...sung.com,
robin.murphy@....com,
linux-kernel@...r.kernel.org
Cc: kirill.shutemov@...ux.intel.com,
elena.reshetova@...el.com,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
iommu@...ts.linux.dev
Subject: [PATCH] dma: Leak pages on dma_set_decrypted() failure

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.

DMA could free decrypted/shared pages if dma_set_decrypted() fails. This
should be a rare case. Just leak the pages in this case instead of freeing
them.

Cc: Christoph Hellwig <hch@....de>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: iommu@...ts.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
Shared (decrypted) pages should never return to the page allocator, or
future usage of the pages may allow for the contents to be exposed to the
host. They may also cause the guest to crash if the page is used in a way
disallowed by HW (i.e. as executable code or as a page table).

Normally set_memory() call failures are rare. But on TDX,
set_memory_XXcrypted() involves calls to the untrusted VMM, and an
attacker could fail these calls such that:
 1. set_memory_encrypted() returns an error and leaves the pages fully
    shared.
 2. set_memory_decrypted() returns an error, but the pages are actually
    fully converted to shared.
This means that patterns like the below can cause problems:

	void *addr = alloc();
	int fail = set_memory_decrypted(addr, 1);

	if (fail)
		free_pages(addr, 0);
And:

	void *addr = alloc();
	int fail = set_memory_decrypted(addr, 1);

	if (fail) {
		set_memory_encrypted(addr, 1);
		free_pages(addr, 0);
	}
Unfortunately these patterns are all over the place. And what the
set_memory() callers should do in this situation is not clear either. They
shouldn't use the pages as shared, because something clearly went wrong,
but they also need the pages to be fully converted back to private in
order to free them. But the kernel needs the VMM's help to do this, and
the VMM is already being uncooperative around the needed operations, so
the conversion isn't guaranteed to succeed and the caller is stuck with
unusable pages.

The only options are to panic or to leak the pages. The kernel tries not
to panic if at all possible, so just leak the pages at the call sites.
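With this patch, the call-site pattern becomes roughly the below (alloc()
is again just a stand-in for the caller's allocation path, not a real
API):

	void *addr = alloc();
	int fail = set_memory_decrypted(addr, 1);

	if (fail)
		return NULL;	/* Leak the pages, their state is unknown */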
Separately there is a patch[0] to warn if the guest detects strange VMM
behavior around this. It is stalled, so in the meantime I'm proceeding
with fixing the callers to leak the pages. No additional warnings are
added here, because the plan is to warn in a single place in the x86
set_memory() code.

[0] https://lore.kernel.org/lkml/20240122184003.129104-1-rick.p.edgecombe@intel.com/
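To make the resulting control flow concrete, the error paths in
dma_direct_alloc() end up looking roughly like the below (abbreviated,
see the diff for the exact change):

	ret = page_address(page);
	if (dma_set_decrypted(dev, ret, size))
		goto out_leak_pages;
	...
out_free_pages:
	__dma_direct_free_pages(dev, page, size);
	return NULL;
out_leak_pages:
	return NULL;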
---
kernel/dma/direct.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 98b2e192fd69..4d543b1e9d57 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -286,7 +286,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	} else {
 		ret = page_address(page);
 		if (dma_set_decrypted(dev, ret, size))
-			goto out_free_pages;
+			goto out_leak_pages;
 	}
 
 	memset(ret, 0, size);
@@ -307,6 +307,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 out_free_pages:
 	__dma_direct_free_pages(dev, page, size);
 	return NULL;
+out_leak_pages:
+	return NULL;
 }
 
 void dma_direct_free(struct device *dev, size_t size,
@@ -367,12 +369,11 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	ret = page_address(page);
 	if (dma_set_decrypted(dev, ret, size))
-		goto out_free_pages;
+		goto out_leak_pages;
 	memset(ret, 0, size);
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
-out_free_pages:
-	__dma_direct_free_pages(dev, page, size);
+out_leak_pages:
 	return NULL;
 }
--
2.34.1