Message-ID: <05f04218-7ebd-1ce3-9e0c-8bc65e5e937a@amd.com>
Date: Fri, 28 Oct 2022 12:14:27 +1100
From: Alexey Kardashevskiy <aik@....com>
To: Thadeu Lima de Souza Cascardo <cascardo@...onical.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
iommu@...ts.linux.dev, Ashish Kalra <ashish.kalra@....com>,
Pankaj Gupta <pankaj.gupta@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH kernel] x86/swiotlb/amd: Half the size if allocation
failed
On 28/10/22 03:46, Thadeu Lima de Souza Cascardo wrote:
> On Thu, Oct 27, 2022 at 04:26:07PM +1100, Alexey Kardashevskiy wrote:
>> At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB
>> or 1GB, whichever is less. However, it is possible that there is no
>> block big enough in low memory, which makes the SWIOTLB allocation
>> fail, and the kernel then continues without DMA. In such a case a VM
>> hangs on DMA.
>>
>> This divides the size in half and tries again reusing the existing
>> remapping logic.
>>
>> This also updates default_nslabs on successful allocation; not
>> updating it looks like an oversight, as it should have broken
>> callers of swiotlb_size_or_default().
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@....com>
>
> Should this have a
> Fixes: e998879d4fb7 ("x86,swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests")
> ?
Well, the problem was there before that patch: the allocation failure
was not handled, while a remap failure was. e998879d4fb7 just made it
easier to see. But it is still worth mentioning, I guess... Thanks,
>
> Cascardo.
>
>> --
>>
>> I hit the problem with QEMU's "-m 16819M", where SWIOTLB was adjusted
>> to 0x7e200 slabs == 1,058,013,184 bytes (slightly less than 1GB) and
>> failed, while 0x7e180 slabs still worked.
>>
>> With guest errors enabled, there are many unassigned accesses from
>> virtio.
>>
>> ---
>> kernel/dma/swiotlb.c | 20 +++++++++++++-------
>> 1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>> index 339a990554e7..d28c294320fd 100644
>> --- a/kernel/dma/swiotlb.c
>> +++ b/kernel/dma/swiotlb.c
>> @@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>> else
>> tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>> if (!tlb) {
>> - pr_warn("%s: failed to allocate tlb structure\n", __func__);
>> - return;
>> - }
>> -
>> - if (remap && remap(tlb, nslabs) < 0) {
>> + pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
>> + __func__, bytes);
>> + } else if (remap && remap(tlb, nslabs) < 0) {
>> memblock_free(tlb, PAGE_ALIGN(bytes));
>> + tlb = NULL;
>> + pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>> + }
>>
>> + if (!tlb) {
>> nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
>> if (nslabs >= IO_TLB_MIN_SLABS)
>> goto retry;
>> -
>> - pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>> return;
>> }
>>
>> + if (default_nslabs != nslabs) {
>> + pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
>> + default_nslabs, nslabs);
>> + default_nslabs = nslabs;
>> + }
>> +
>> alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
>> mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
>> if (!mem->slots) {
>> --
>> 2.37.3
>>
--
Alexey