Date: Wed, 21 Feb 2024 23:37:05 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Will Deacon <will@...nel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>
CC: "kernel-team@...roid.com" <kernel-team@...roid.com>,
	"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>, Christoph Hellwig
	<hch@....de>, Marek Szyprowski <m.szyprowski@...sung.com>, Robin Murphy
	<robin.murphy@....com>, Petr Tesarik <petr.tesarik1@...wei-partners.com>,
	Dexuan Cui <decui@...rosoft.com>, Nicolin Chen <nicolinc@...dia.com>
Subject: RE: [PATCH v4 4/5] swiotlb: Fix alignment checks when both allocation
 and DMA masks are present

From: Will Deacon <will@...nel.org> Sent: Wednesday, February 21, 2024 3:35 AM
> 
> Nicolin reports that swiotlb buffer allocations fail for an NVME device
> behind an IOMMU using 64KiB pages. This is because we end up with a
> minimum allocation alignment of 64KiB (for the IOMMU to map the buffer
> safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME
> page (i.e. preserving the 4KiB page offset from the original allocation).
> If the original address is not 4KiB-aligned, the allocation will fail
> because swiotlb_search_pool_area() erroneously compares these unmasked
> bits with the 64KiB-aligned candidate allocation.
> 
> Tweak swiotlb_search_pool_area() so that the DMA alignment mask is
> reduced based on the required alignment of the allocation.
> 
> Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
> Reported-by: Nicolin Chen <nicolinc@...dia.com>
> Link: https://lore.kernel.org/all/cover.1707851466.git.nicolinc@nvidia.com/
> Signed-off-by: Will Deacon <will@...nel.org>
> ---
>  kernel/dma/swiotlb.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c20324fba814..c381a7ed718f 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	dma_addr_t tbl_dma_addr =
>  		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
>  	unsigned long max_slots = get_max_slots(boundary_mask);
> -	unsigned int iotlb_align_mask =
> -		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> +	unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
>  	unsigned int nslots = nr_slots(alloc_size), stride;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int index, slots_checked, count = 0, i;
> @@ -993,6 +992,14 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	BUG_ON(!nslots);
>  	BUG_ON(area_index >= pool->nareas);
> 
> +	/*
> +	 * Ensure that the allocation is at least slot-aligned and update
> +	 * 'iotlb_align_mask' to ignore bits that will be preserved when
> +	 * offsetting into the allocation.
> +	 */
> +	alloc_align_mask |= (IO_TLB_SIZE - 1);
> +	iotlb_align_mask &= ~alloc_align_mask;
> +
>  	/*
>  	 * For mappings with an alignment requirement don't bother looping to
>  	 * unaligned slots once we found an aligned one.
> --
> 2.44.0.rc0.258.g7320e95886-goog

Reviewed-by: Michael Kelley <mhklinux@...look.com>

But see my comments in Patch 1 of the series about whether this
should be folded into Patch 1.
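
For readers following along, below is a minimal standalone sketch (userspace C, not kernel code) of the mask arithmetic the commit message describes, using the configuration from the bug report: a 64KiB IOMMU granule (alloc_align_mask = 0xffff) and a 4KiB NVMe controller page (min DMA alignment mask = 0xfff). The addresses and mask values are illustrative assumptions, not taken from the report; only IO_TLB_SIZE matches the kernel's 2KiB slot size.

/*
 * Illustration only: compare the pre-fix and post-fix computation of
 * iotlb_align_mask and show why a 64KiB-aligned candidate slot failed
 * the old alignment check when the original address keeps a 4KiB page
 * offset with bit 11 set.
 */
#include <stdio.h>
#include <stdbool.h>

#define IO_TLB_SHIFT	11
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)	/* 2KiB swiotlb slot */

/* The check swiotlb_search_pool_area() applies to each candidate slot. */
static bool slot_matches(unsigned long slot_addr, unsigned long orig_addr,
			 unsigned long iotlb_align_mask)
{
	return (slot_addr & iotlb_align_mask) == (orig_addr & iotlb_align_mask);
}

int main(void)
{
	unsigned long min_align_mask   = 0xfff;		/* 4KiB NVMe page - 1 (assumed) */
	unsigned long alloc_align_mask = 0xffff;	/* 64KiB IOMMU granule - 1 (assumed) */
	unsigned long orig_addr = 0x1234800UL;		/* hypothetical: bit 11 set, not 4KiB-aligned */
	unsigned long slot_addr = 0x40000UL;		/* hypothetical 64KiB-aligned candidate */

	/* Before the fix: only the sub-slot bits are stripped from the mask. */
	unsigned long old_mask = min_align_mask & ~(IO_TLB_SIZE - 1);

	/* After the fix: bits covered by the allocation alignment are ignored,
	 * since the allocation itself already satisfies them. */
	unsigned long fixed_alloc_mask = alloc_align_mask | (IO_TLB_SIZE - 1);
	unsigned long new_mask = min_align_mask & ~fixed_alloc_mask;

	printf("old iotlb_align_mask = %#lx -> match: %s\n", old_mask,
	       slot_matches(slot_addr, orig_addr, old_mask) ? "yes" : "no");
	printf("new iotlb_align_mask = %#lx -> match: %s\n", new_mask,
	       slot_matches(slot_addr, orig_addr, new_mask) ? "yes" : "no");
	return 0;
}

With these assumed values the old mask comes out as 0x800, which no 64KiB-aligned candidate can ever match when orig_addr has bit 11 set, so every slot is rejected; the new mask collapses to 0 and the search succeeds, with the 4KiB page offset preserved separately via the offset handling.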

