Message-Id: <20231026095123.222-1-petrtesarik@huaweicloud.com>
Date: Thu, 26 Oct 2023 11:51:23 +0200
From: Petr Tesarik <petrtesarik@...weicloud.com>
To: Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Petr Tesarik <petr.tesarik1@...wei-partners.com>,
iommu@...ts.linux.dev (open list:DMA MAPPING HELPERS),
linux-kernel@...r.kernel.org (open list)
Cc: Wangkefeng <wangkefeng.wang@...wei.com>,
Roberto Sassu <roberto.sassu@...weicloud.com>, petr@...arici.cz
Subject: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting

From: Petr Tesarik <petr.tesarik1@...wei-partners.com>

Do not decrypt a dynamically allocated TLB area until its physical address
is known to be below the requested limit. Currently, pages are allocated
and decrypted, but then they may be freed while still decrypted if
swiotlb_alloc_tlb() determines that the physical address is too high.

Let the caller differentiate between an unsuitable physical address
(=> retry from a lower zone) and an allocation failure (=> no point in
retrying).

Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik <petr.tesarik1@...wei-partners.com>
---
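A note for reviewers, not part of the commit log: the return value of
alloc_dma_pages() becomes three-valued, so callers must distinguish the
cases with IS_ERR() rather than a plain NULL check. A minimal sketch of
the intended caller pattern, reusing the names from the patch below; the
retry label is illustrative only, not code from the patch:

	page = alloc_dma_pages(gfp, bytes, phys_limit);
	if (!page)
		return NULL;		/* out of memory; retrying cannot help */
	if (IS_ERR(page))
		goto retry_lower_zone;	/* page was freed inside the helper */
	/* success: page is decrypted and lies entirely below phys_limit */

Returning ERR_PTR(-EAGAIN) rather than adding an output parameter keeps
the retry loop in swiotlb_alloc_tlb() a single expression.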
 kernel/dma/swiotlb.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dff067bd56b1..d1118f6f61b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -558,30 +558,36 @@ void __init swiotlb_exit(void)
* alloc_dma_pages() - allocate pages to be used for DMA
* @gfp: GFP flags for the allocation.
* @bytes: Size of the buffer.
+ * @phys_limit: Maximum allowed physical address of the buffer.
*
* Allocate pages from the buddy allocator. If successful, make the allocated
* pages decrypted so that they can be used for DMA.
*
- * Return: Decrypted pages, or %NULL on failure.
+ * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
+ * if the allocated physical address was above @phys_limit.
*/
-static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
+static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
{
unsigned int order = get_order(bytes);
struct page *page;
+ phys_addr_t paddr;
void *vaddr;
page = alloc_pages(gfp, order);
if (!page)
return NULL;
- vaddr = page_address(page);
+ paddr = page_to_phys(page);
+ if (paddr + bytes - 1 > phys_limit)
+ goto error;
+ vaddr = phys_to_virt(paddr);
if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
goto error;
return page;
error:
__free_pages(page, order);
- return NULL;
+ return ERR_PTR(-EAGAIN);
}
/**
@@ -618,11 +624,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
else if (phys_limit <= DMA_BIT_MASK(32))
gfp |= __GFP_DMA32;
- while ((page = alloc_dma_pages(gfp, bytes)) &&
- page_to_phys(page) + bytes - 1 > phys_limit) {
- /* allocated, but too high */
- __free_pages(page, get_order(bytes));
-
+ while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
phys_limit < DMA_BIT_MASK(64) &&
!(gfp & (__GFP_DMA32 | __GFP_DMA)))
--
2.42.0