Message-ID: <20221027052607.260234-1-aik@amd.com>
Date: Thu, 27 Oct 2022 16:26:07 +1100
From: Alexey Kardashevskiy <aik@....com>
To: <kvm@...r.kernel.org>
CC: Alexey Kardashevskiy <aik@....com>, <linux-kernel@...r.kernel.org>,
<iommu@...ts.linux.dev>, Ashish Kalra <ashish.kalra@....com>,
Pankaj Gupta <pankaj.gupta@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Christoph Hellwig <hch@....de>
Subject: [PATCH kernel] x86/swiotlb/amd: Halve the size if allocation fails

At the moment the AMD encrypted platform reserves 6% of RAM for the SWIOTLB,
or 1GB, whichever is less. However, it is possible that there is no block big
enough in low memory, which makes the SWIOTLB allocation fail; the kernel then
continues without a bounce buffer and the VM hangs on the first DMA.
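
For reference, the 6%-capped-at-1GB policy comes from the SEV setup path
(sev_setup_arch() in arch/x86/mm/mem_encrypt_amd.c); abbreviated, the sizing
is roughly:

	size = memblock_phys_mem_size() * 6 / 100;
	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
	swiotlb_adjust_size(size);
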
This halves the size and tries again, reusing the existing remapping retry
logic.
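
To illustrate the retry step (IO_TLB_SEGSIZE is 128 slabs), each pass of the
loop in the patch below does:

	nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
	/* e.g. 0x7e200 slabs -> 0x3f100 slabs, roughly 504MB of bounce buffers */

and it gives up once nslabs drops below IO_TLB_MIN_SLABS.
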
This also updates default_nslabs on successful allocation; not doing so looks
like an oversight, as a stale value would break callers of
swiotlb_size_or_default().
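
For context, that helper derives its return value directly from default_nslabs
(in kernel/dma/swiotlb.c it is just the slab count scaled to bytes), so a stale
value would report a size that was never actually allocated:

	unsigned long swiotlb_size_or_default(void)
	{
		return default_nslabs << IO_TLB_SHIFT;
	}
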
Signed-off-by: Alexey Kardashevskiy <aik@....com>
---

I hit the problem with QEMU's "-m 16819M", where the SWIOTLB was adjusted to
0x7e200 slabs == 1,058,013,184 bytes (slightly less than 1GB) and failed to
allocate, while 0x7e180 slabs still worked.
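
Each slab is 1 << IO_TLB_SHIFT == 2048 bytes, so the two sizes work out to:

	0x7e200 slabs * 2048 = 1,058,013,184 bytes  (fails)
	0x7e180 slabs * 2048 = 1,057,751,040 bytes  (works)

both just under 1GB == 1,073,741,824 bytes.
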
With QEMU's guest error logging enabled, there are many unassigned memory
accesses from virtio.
---
 kernel/dma/swiotlb.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 339a990554e7..d28c294320fd 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	else
 		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
 	if (!tlb) {
-		pr_warn("%s: failed to allocate tlb structure\n", __func__);
-		return;
-	}
-
-	if (remap && remap(tlb, nslabs) < 0) {
+		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
+			__func__, bytes);
+	} else if (remap && remap(tlb, nslabs) < 0) {
 		memblock_free(tlb, PAGE_ALIGN(bytes));
+		tlb = NULL;
+		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
+	}
 
+	if (!tlb) {
 		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
 		if (nslabs >= IO_TLB_MIN_SLABS)
 			goto retry;
-
-		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
 		return;
 	}
 
+	if (default_nslabs != nslabs) {
+		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
+			default_nslabs, nslabs);
+		default_nslabs = nslabs;
+	}
+
 	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
 	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
 	if (!mem->slots) {
--
2.37.3