Message-ID: <15374225-f136-4c42-cf6c-f587b654a526@amd.com>
Date:   Mon, 31 Oct 2022 20:18:31 +0100
From:   "Gupta, Pankaj" <pankaj.gupta@....com>
To:     Alexey Kardashevskiy <aik@....com>, kvm@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
        Robin Murphy <robin.murphy@....com>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Christoph Hellwig <hch@....de>,
        Ashish Kalra <ashish.kalra@....com>,
        Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH kernel v2] swiotlb: Half the size if allocation failed

On 10/31/2022 9:13 AM, Alexey Kardashevskiy wrote:
> At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB,
> or 1GB, whichever is less. However, it is possible that no block in low
> memory is big enough, which makes the SWIOTLB allocation fail, and the
> kernel continues without DMA. In such a case the VM hangs on DMA.
> 
> This moves alloc+remap to a helper and calls it from a loop where
> the size is halved on each iteration.
> 
> This also updates default_nslabs on successful allocation; not doing so
> looks like an oversight, as it would have broken callers of
> swiotlb_size_or_default().
> 
> Signed-off-by: Alexey Kardashevskiy <aik@....com>
> --
> Changes:
> v2:
> * moved alloc+remap to a helper as suggested
> * removed "x86" and "amd" from subj
> 
> --
> I hit the problem with QEMU's "-m 16819M", where SWIOTLB was adjusted to
> 0x7e200 slabs == 1,058,013,184 bytes (slightly less than 1GB) while
> 0x7e180 slabs still worked.
> 
> With guest errors enabled, there are many unassigned accesses from
> virtio.
> ---
>   kernel/dma/swiotlb.c | 66 +++++++++++++++++++++++++++-----------------
>   1 file changed, 41 insertions(+), 25 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 339a990554e7..53fc6e7d3aa5 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -300,6 +300,36 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>   	return;
>   }
>   
> +static void *swiotlb_memblock_alloc(unsigned long nslabs, unsigned int flags,
> +				    int (*remap)(void *tlb, unsigned long nslabs))
> +{
> +	size_t bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
> +	void *tlb;
> +
> +	/*
> +	 * By default allocate the bounce buffer memory from low memory, but
> +	 * allow to pick a location everywhere for hypervisors with guest
> +	 * memory encryption.
> +	 */
> +	if (flags & SWIOTLB_ANY)
> +		tlb = memblock_alloc(bytes, PAGE_SIZE);
> +	else
> +		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> +
> +	if (!tlb) {
> +		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n", __func__, bytes);
> +		return NULL;
> +	}
> +
> +	if (remap && remap(tlb, nslabs) < 0) {
> +		memblock_free(tlb, PAGE_ALIGN(bytes));
> +		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> +		return NULL;
> +	}
> +
> +	return tlb;
> +}
> +
>   /*
>    * Statically reserve bounce buffer space and initialize bounce buffer data
>    * structures for the software IO TLB used to implement the DMA API.
> @@ -310,7 +340,6 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>   	struct io_tlb_mem *mem = &io_tlb_default_mem;
>   	unsigned long nslabs;
>   	size_t alloc_size;
> -	size_t bytes;
>   	void *tlb;
>   
>   	if (!addressing_limit && !swiotlb_force_bounce)
> @@ -325,32 +354,19 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>   	if (!default_nareas)
>   		swiotlb_adjust_nareas(num_possible_cpus());
>   
> -	nslabs = default_nslabs;
> -	/*
> -	 * By default allocate the bounce buffer memory from low memory, but
> -	 * allow to pick a location everywhere for hypervisors with guest
> -	 * memory encryption.
> -	 */
> -retry:
> -	bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
> -	if (flags & SWIOTLB_ANY)
> -		tlb = memblock_alloc(bytes, PAGE_SIZE);
> -	else
> -		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> -	if (!tlb) {
> -		pr_warn("%s: failed to allocate tlb structure\n", __func__);
> -		return;
> -	}
> -
> -	if (remap && remap(tlb, nslabs) < 0) {
> -		memblock_free(tlb, PAGE_ALIGN(bytes));
> -
> +	for (nslabs = default_nslabs;; ) {
> +		tlb = swiotlb_memblock_alloc(nslabs, flags, remap);
> +		if (tlb)
> +			break;
> +		if (nslabs <= IO_TLB_MIN_SLABS)
> +			return;
>   		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
> -		if (nslabs >= IO_TLB_MIN_SLABS)
> -			goto retry;
> +	}
>   
> -		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> -		return;
> +	if (default_nslabs != nslabs) {
> +		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
> +			default_nslabs, nslabs);
> +		default_nslabs = nslabs;
>   	}
>   
>   	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));

Falling back to a smaller allocation when the contiguous memblock
allocation below 4G fails seems to fix the inconsistent state issue where
the buffer is not allocated at all.
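
For reference, the sequence of sizes the new loop will try can be
reproduced with a minimal standalone sketch (userspace C, not kernel
code; IO_TLB_SHIFT = 11 and IO_TLB_SEGSIZE = 128 mirror the kernel
definitions, and IO_TLB_MIN_SLABS = 512 assumes the 1 MiB floor):

  /*
   * Standalone sketch: prints the candidate SWIOTLB sizes the patch's
   * halving loop would try, starting from the failing size reported
   * above. Allocation itself is not modelled.
   */
  #include <stdio.h>

  #define IO_TLB_SHIFT     11UL   /* 2 KiB per slab */
  #define IO_TLB_SEGSIZE   128UL
  #define IO_TLB_MIN_SLABS 512UL  /* assumed 1 MiB floor */

  /* round up to a power-of-two multiple, like the kernel's ALIGN() */
  #define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          unsigned long nslabs = 0x7e200; /* slab count from the report */

          for (;;) {
                  printf("try %#lx slabs = %lu bytes\n",
                         nslabs, nslabs << IO_TLB_SHIFT);
                  if (nslabs <= IO_TLB_MIN_SLABS)
                          break;
                  nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
          }
          return 0;
  }

Each step halves the slab count and rounds it up to a whole
IO_TLB_SEGSIZE segment, so the first retry after the failing 0x7e200
would be 0x3f100 slabs (529,006,592 bytes).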

Feel free to add:
Reviewed-by: Pankaj Gupta <pankaj.gupta@....com>
