Message-ID:
 <SN6PR02MB4157FB27CD890E9F29408926D4282@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Fri, 15 Mar 2024 02:53:10 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Petr Tesarik <petrtesarik@...weicloud.com>, Christoph Hellwig
	<hch@....de>, Marek Szyprowski <m.szyprowski@...sung.com>, Robin Murphy
	<robin.murphy@....com>, Will Deacon <will@...nel.org>, Nicolin Chen
	<nicolinc@...dia.com>, "open list:DMA MAPPING HELPERS"
	<iommu@...ts.linux.dev>, open list <linux-kernel@...r.kernel.org>
CC: Roberto Sassu <roberto.sassu@...weicloud.com>, Petr Tesarik
	<petr.tesarik1@...wei-partners.com>
Subject: RE: [PATCH 1/1] swiotlb: extend buffer pre-padding to
 alloc_align_mask if necessary

From: Petr Tesarik <petrtesarik@...weicloud.com> Sent: Tuesday, March 12, 2024 6:42 AM
> 
> Allow a buffer pre-padding of up to alloc_align_mask. If the allocation
> alignment is bigger than IO_TLB_SIZE and min_align_mask covers any non-zero
> bits in the original address between IO_TLB_SIZE and alloc_align_mask,
> these bits are not preserved in the swiotlb buffer address.
> 
> To fix this case, increase the allocation size and use a larger offset
> within the allocated buffer. As a result, extra padding slots may be
> allocated before the mapping start address.
> 
> Set the orig_addr in these padding slots to INVALID_PHYS_ADDR, because they
> do not correspond to any CPU buffer and the data must never be synced.
> 
> The padding slots should be automatically released when the buffer is
> unmapped. However, swiotlb_tbl_unmap_single() takes only the address of the
> DMA buffer slot, not the first padding slot. Save the number of padding
> slots in struct io_tlb_slot and use it to adjust the slot index in
> swiotlb_release_slots(), so all allocated slots are properly freed.
> 

I've been staring at this for the past two days, and have run tests with
various min_align_mask and alloc_align_mask values against orig_addr
values with various alignments and mapping sizes.  I'm planning to run
additional tests over the weekend, but I think it's solid.  See a few
comments below.

> Fixes: 2fd4fa5d3fb5 ("swiotlb: Fix alignment checks when both allocation and DMA masks are present")
> Link: https://lore.kernel.org/linux-iommu/20240311210507.217daf8b@meshulam.tesarici.cz/
> Signed-off-by: Petr Tesarik <petr.tesarik1@...wei-partners.com>
> ---
>  kernel/dma/swiotlb.c | 34 +++++++++++++++++++++++++++++-----
>  1 file changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 86fe172b5958..8ce11abc691f 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -69,11 +69,13 @@
>   * @alloc_size:	Size of the allocated buffer.
>   * @list:	The free list describing the number of free entries available
>   *		from each index.
> + * @padding:    Number of preceding padding slots.
>   */
>  struct io_tlb_slot {
>  	phys_addr_t orig_addr;
>  	size_t alloc_size;
>  	unsigned int list;
> +	unsigned int padding;

Even without the padding field, I presume that in
64-bit builds this struct is already 24 bytes in size so as
to maintain 64-bit alignment for the orig_addr and
alloc_size fields. If that's the case, then adding the
padding field doesn't change the amount of memory
required for the slot array.  Is that correct? Both the
"list" and "padding" fields contain only small integers,
but presumably reducing their size from "int" to "short"
wouldn't help except in 32-bit builds.
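
To double-check that reasoning, here's a quick userspace sketch
(the phys_addr_t stub and the sizes assume an LP64 build; it's not
using the kernel headers):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;		/* assumption: 64-bit phys addrs */

struct io_tlb_slot {
	phys_addr_t orig_addr;		/* 8 bytes */
	size_t alloc_size;		/* 8 bytes on LP64 */
	unsigned int list;		/* 4 bytes */
	unsigned int padding;		/* 4 bytes, fills the old tail hole */
};

int main(void)
{
	/* Prints 24 on LP64: "list" alone left a 4-byte alignment hole,
	 * so the new field costs nothing in the slot array. */
	printf("%zu\n", sizeof(struct io_tlb_slot));
	return 0;
}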

>  };
> 
>  static bool swiotlb_force_bounce;
> @@ -287,6 +289,7 @@ static void swiotlb_init_io_tlb_pool(struct io_tlb_pool *mem, phys_addr_t start,
>  					 mem->nslabs - i);
>  		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>  		mem->slots[i].alloc_size = 0;
> +		mem->slots[i].padding = 0;
>  	}
> 
>  	memset(vaddr, 0, bytes);
> @@ -1328,11 +1331,12 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  		unsigned long attrs)
>  {
>  	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> -	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
> +	unsigned int offset;
>  	struct io_tlb_pool *pool;
>  	unsigned int i;
>  	int index;
>  	phys_addr_t tlb_addr;
> +	unsigned int padding;
> 
>  	if (!mem || !mem->nslabs) {
>  		dev_warn_ratelimited(dev,
> @@ -1349,6 +1353,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  		return (phys_addr_t)DMA_MAPPING_ERROR;
>  	}
> 
> +	/*
> +	 * Calculate buffer pre-padding within the allocated space. Use it to
> +	 * preserve the low bits of the original address according to device's
> +	 * min_align_mask. Limit the padding to alloc_align_mask or slot size
> +	 * (whichever is bigger); higher bits of the original address are
> +	 * preserved by selecting a suitable IO TLB slot.
> +	 */
> +	offset = orig_addr & dma_get_min_align_mask(dev) &
> +		(alloc_align_mask | (IO_TLB_SIZE - 1));
>  	index = swiotlb_find_slots(dev, orig_addr,
>  				   alloc_size + offset, alloc_align_mask, &pool);
>  	if (index == -1) {
> @@ -1364,6 +1377,12 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  	 * This is needed when we sync the memory.  Then we sync the buffer if
>  	 * needed.
>  	 */
> +	padding = 0;
> +	while (offset >= IO_TLB_SIZE) {
> +		pool->slots[index++].orig_addr = INVALID_PHYS_ADDR;
> +		pool->slots[index].padding = ++padding;
> +		offset -= IO_TLB_SIZE;
> +	}

Looping to fill in the padding slots seems unnecessary.
The orig_addr field should already be initialized to
INVALID_PHYS_ADDR, and the padding field should already
be initialized to zero.  The value of "padding" should be just
(offset / IO_TLB_SIZE), and it only needs to be stored in the
first non-padding slot where swiotlb_release_slots() will
find it.

FWIW, your while loop above feels a bit weird in that it
updates two different slots each time through the loop,
and the padding field of the first padding slot isn't
updated. It took me a little while to figure that out. :-)
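
Concretely, this untested sketch is what I have in mind (same
variable names as in your patch):

	/*
	 * Padding slots already have orig_addr == INVALID_PHYS_ADDR
	 * and padding == 0, courtesy of swiotlb_init_io_tlb_pool()
	 * and swiotlb_release_slots(), so only the first non-padding
	 * slot needs the count recorded.
	 */
	padding = offset >> IO_TLB_SHIFT;	/* offset / IO_TLB_SIZE */
	index += padding;
	offset &= IO_TLB_SIZE - 1;	/* tlb_addr and nr_slots below
					 * want only the in-slot part */
	pool->slots[index].padding = padding;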

>  	for (i = 0; i < nr_slots(alloc_size + offset); i++)
>  		pool->slots[index + i].orig_addr = slot_addr(orig_addr, i);
>  	tlb_addr = slot_addr(pool->start, index) + offset;
> @@ -1385,12 +1404,16 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
>  	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
>  	unsigned long flags;
>  	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
> -	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> -	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
> -	int aindex = index / mem->area_nslabs;
> -	struct io_tlb_area *area = &mem->areas[aindex];
> +	int index, nslots, aindex;
> +	struct io_tlb_area *area;
>  	int count, i;
> 
> +	index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> +	index -= mem->slots[index].padding;
> +	nslots = nr_slots(mem->slots[index].alloc_size + offset);
> +	aindex = index / mem->area_nslabs;
> +	area = &mem->areas[aindex];
> +
>  	/*
>  	 * Return the buffer to the free list by setting the corresponding
>  	 * entries to indicate the number of contiguous entries available.
> @@ -1413,6 +1436,7 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
>  		mem->slots[i].list = ++count;
>  		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>  		mem->slots[i].alloc_size = 0;
> +		mem->slots[i].padding = 0;
>  	}
> 
>  	/*
> --
> 2.34.1
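
And to make the offset computation in swiotlb_tbl_map_single()
concrete, here is a standalone example with made-up mask values
(IO_TLB_SHIFT is 11 in current kernels, so IO_TLB_SIZE is 2048):

#include <stdio.h>

#define IO_TLB_SHIFT	11
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)

int main(void)
{
	unsigned long orig_addr = 0x1b2f34;	/* hypothetical address */
	unsigned long min_align_mask = 0xfff;	/* e.g. 4K - 1, as NVMe uses */
	unsigned long alloc_align_mask = 0x3fff;	/* e.g. 16K - 1 */

	unsigned long offset = orig_addr & min_align_mask &
			       (alloc_align_mask | (IO_TLB_SIZE - 1));

	printf("offset         = %#lx\n", offset);	/* 0xf34 */
	printf("padding slots  = %lu\n", offset >> IO_TLB_SHIFT);	/* 1 */
	printf("in-slot offset = %#lx\n", offset & (IO_TLB_SIZE - 1));	/* 0x734 */
	return 0;
}

Here bit 11 of orig_addr must survive in tlb_addr to satisfy
min_align_mask, but the old code masked it off; the one pre-padding
slot is what preserves it.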

