Message-ID: <d1410850-56f9-f085-8889-8e5a12d5ed63@amd.com>
Date: Fri, 28 Oct 2022 09:13:42 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Alexey Kardashevskiy <aik@....com>, kvm@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
Ashish Kalra <ashish.kalra@....com>,
Pankaj Gupta <pankaj.gupta@....com>,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH kernel] x86/swiotlb/amd: Halve the size if allocation failed
On 10/27/22 00:26, Alexey Kardashevskiy wrote:
> At the moment the AMD encrypted platform reserves 6% of RAM for the
> SWIOTLB, or 1GB, whichever is less. However, it is possible that there
> is no block big enough in low memory, which makes the SWIOTLB
> allocation fail and the kernel continue without a bounce buffer. In
> such a case the VM hangs on DMA.
>
> This divides the size in half and tries again, reusing the existing
> remapping logic.
>
> This also updates default_nslabs on successful allocation; not doing
> so looks like an oversight, as it should have broken callers of
> swiotlb_size_or_default() (see the helper sketched below).
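
That helper is a one-liner over default_nslabs (as found in
kernel/dma/swiotlb.c), which is why a shrunken-but-unrecorded
allocation would misreport the bounce buffer size to its callers:

    /* Existing helper: reports the bounce buffer size in bytes,
     * derived from default_nslabs (2KB per slab, IO_TLB_SHIFT == 11) */
    unsigned long swiotlb_size_or_default(void)
    {
            return default_nslabs << IO_TLB_SHIFT;
    }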
>
> Signed-off-by: Alexey Kardashevskiy <aik@....com>
Reviewed-by: Tom Lendacky <thomas.lendacky@....com>
> --
>
> I hit the problem with QEMU's "-m 16819M", where the SWIOTLB was
> adjusted to 0x7e200 slabs == 1,058,013,184 bytes (slightly less than
> 1GB) while 0x7e180 slabs still worked (see the worked arithmetic
> below).
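
For reference, the arithmetic works out as in this standalone sketch
(assuming IO_TLB_SHIFT == 11, i.e. 2KB per slab, and
IO_TLB_SEGSIZE == 128, matching the kernel's definitions):

    #include <stdio.h>

    #define IO_TLB_SHIFT    11      /* 2KB per slab */
    #define IO_TLB_SEGSIZE  128
    #define ALIGN(x, a)     (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

    int main(void)
    {
            unsigned long nslabs = 0x7e200;

            /* 0x7e200 slabs -> 1058013184 bytes, just under 1GB */
            printf("%#lx slabs = %lu bytes\n", nslabs,
                   nslabs << IO_TLB_SHIFT);

            /* one halving step of the retry loop in the patch */
            nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);

            /* 0x3f100 slabs -> 529006592 bytes (~504MB) */
            printf("%#lx slabs = %lu bytes\n", nslabs,
                   nslabs << IO_TLB_SHIFT);
            return 0;
    }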
>
> With guest errors enabled, there are many unassigned accesses from
> virtio.
>
> ---
> kernel/dma/swiotlb.c | 20 +++++++++++++-------
> 1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 339a990554e7..d28c294320fd 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> else
> tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> if (!tlb) {
> - pr_warn("%s: failed to allocate tlb structure\n", __func__);
> - return;
> - }
> -
> - if (remap && remap(tlb, nslabs) < 0) {
> + pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
> + __func__, bytes);
> + } else if (remap && remap(tlb, nslabs) < 0) {
> memblock_free(tlb, PAGE_ALIGN(bytes));
> + tlb = NULL;
> + pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> + }
>
> + if (!tlb) {
> nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
> if (nslabs >= IO_TLB_MIN_SLABS)
> goto retry;
> -
> - pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> return;
> }
>
> + if (default_nslabs != nslabs) {
> + pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
> + default_nslabs, nslabs);
> + default_nslabs = nslabs;
> + }
> +
> alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
> if (!mem->slots) {