lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20241104091048.GA25041@lst.de>
Date: Mon, 4 Nov 2024 10:10:48 +0100
From: Christoph Hellwig <hch@....de>
To: Robin Murphy <robin.murphy@....com>
Cc: Leon Romanovsky <leon@...nel.org>, Jens Axboe <axboe@...nel.dk>,
	Jason Gunthorpe <jgg@...pe.ca>, Joerg Roedel <joro@...tes.org>,
	Will Deacon <will@...nel.org>, Christoph Hellwig <hch@....de>,
	Sagi Grimberg <sagi@...mberg.me>,
	Leon Romanovsky <leonro@...dia.com>,
	Keith Busch <kbusch@...nel.org>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Logan Gunthorpe <logang@...tatee.com>,
	Yishai Hadas <yishaih@...dia.com>,
	Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
	Kevin Tian <kevin.tian@...el.com>,
	Alex Williamson <alex.williamson@...hat.com>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	Jérôme Glisse <jglisse@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
	linux-rdma@...r.kernel.org, iommu@...ts.linux.dev,
	linux-nvme@...ts.infradead.org, linux-pci@...r.kernel.org,
	kvm@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v1 07/17] dma-mapping: Implement link/unlink ranges API

On Thu, Oct 31, 2024 at 09:18:07PM +0000, Robin Murphy wrote:
>>   +static int __dma_iova_link(struct device *dev, dma_addr_t addr,
>> +		phys_addr_t phys, size_t size, enum dma_data_direction dir,
>> +		unsigned long attrs)
>> +{
>> +	bool coherent = dev_is_dma_coherent(dev);
>> +
>> +	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>
> If you really imagine this can support non-coherent operation and 
> DMA_ATTR_SKIP_CPU_SYNC, where are the corresponding explicit sync 
> operations? dma_sync_single_*() sure as heck aren't going to work...
>
> In fact, same goes for SWIOTLB bouncing even in the coherent case.

No with explicit sync operations.  But plain map/unmap works, I've
actually verified that with nvme.  And that's a pretty large use
case.

>> +		arch_sync_dma_for_device(phys, size, dir);
>
> Plus if the aim is to pass P2P and whatever arbitrary physical addresses 
> through here as well, how can we be sure this isn't going to explode?

That's a good point.  Only mapped through host bridge P2P can even
end up here, so the address is a perfectly valid physical address
in the host.  But I'm not sure if all arch_sync_dma_for_device
implementations handle IOMMU memory fine.

>> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>> +	struct iova_domain *iovad = &cookie->iovad;
>> +	size_t iova_start_pad = iova_offset(iovad, phys);
>> +	size_t iova_end_pad = iova_offset(iovad, phys + size);
>
> I thought the code below was wrong until I double-checked and realised that 
> this is not what its name implies it to be...

Which variable does this refer to, and what would be a better name?

>> +		phys = iommu_iova_to_phys(domain, addr);
>> +		if (WARN_ON(!phys))
>> +			continue;
>> +		len = min_t(size_t,
>> +			end - addr, iovad->granule - iova_start_pad);
>> +
>> +		if (!dev_is_dma_coherent(dev) &&
>> +		    !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>> +			arch_sync_dma_for_cpu(phys, len, dir);
>> +
>> +		swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
>
> How do you know that "phys" and "len" match what was originally allocated 
> and bounced in, and this isn't going to try to bounce out too much, free 
> the wrong slot, or anything else nasty? If it's not supposed to be 
> intentional that a sub-granule buffer can be linked to any offset in the 
> middle of the IOVA range as long as its original physical address is 
> aligned to the IOVA granule size(?), why try to bounce anywhere other than 
> the ends of the range at all?

Mostly because the code is simpler and unless misused it just works.
But it might be worth adding explicit checks for the start and end.

>> +static void __iommu_dma_iova_unlink(struct device *dev,
>> +		struct dma_iova_state *state, size_t offset, size_t size,
>> +		enum dma_data_direction dir, unsigned long attrs,
>> +		bool free_iova)
>> +{
>> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>> +	struct iova_domain *iovad = &cookie->iovad;
>> +	dma_addr_t addr = state->addr + offset;
>> +	size_t iova_start_pad = iova_offset(iovad, addr);
>> +	struct iommu_iotlb_gather iotlb_gather;
>> +	size_t unmapped;
>> +
>> +	if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
>> +	    (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
>> +		iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
>> +
>> +	iommu_iotlb_gather_init(&iotlb_gather);
>> +	iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
>
> Is is really worth the bother?

Worth what?

>> +	size = iova_align(iovad, size + iova_start_pad);
>> +	addr -= iova_start_pad;
>> +	unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather);
>> +	WARN_ON(unmapped != size);
>> +
>> +	if (!iotlb_gather.queued)
>> +		iommu_iotlb_sync(domain, &iotlb_gather);
>> +	if (free_iova)
>> +		iommu_dma_free_iova(cookie, addr, size, &iotlb_gather);
>
> There's no guarantee that "size" is the correct value here, so this has 
> every chance of corrupting the IOVA domain.

Yes, but the same is true for every users of the iommu_* API as well.

>> +/**
>> + * dma_iova_unlink - Unlink a range of IOVA space
>> + * @dev: DMA device
>> + * @state: IOVA state
>> + * @offset: offset into the IOVA state to unlink
>> + * @size: size of the buffer
>> + * @dir: DMA direction
>> + * @attrs: attributes of mapping properties
>> + *
>> + * Unlink a range of IOVA space for the given IOVA state.
>
> If I initially link a large range in one go, then unlink a small part of 
> it, what behaviour can I expect?

As in map say 128k and then unmap 4k?  It will just work, even if that
is not the intended use case, which is either map everything up front
and unmap everything together, or the HMM version of random constant
mapping and unmapping at page size granularity.


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ