Message-ID: <20160713102718.GD27306@suse.de>
Date:	Wed, 13 Jul 2016 12:27:18 +0200
From:	Joerg Roedel <jroedel@...e.de>
To:	Robin Murphy <robin.murphy@....com>
Cc:	Joerg Roedel <joro@...tes.org>, iommu@...ts.linux-foundation.org,
	Vincent.Wan@....com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 16/20 v2] iommu/amd: Optimize map_sg and unmap_sg

On Tue, Jul 12, 2016 at 04:34:16PM +0100, Robin Murphy wrote:
> The boundary masks for block devices are tricky to track down through so
> many layers of indirection in the common frameworks, but there are a lot
> of 64K ones there. After some more impromptu digging into the subject
> I've finally satisfied my curiosity - it seems this restriction stems
> from the ATA DMA PRD table format, so it could perhaps still be a real
> concern for anyone using some crusty old PCI IDE card in their modern
> system.

The boundary mask is a capability of the underlying PCI device, no? The
ATA stack (or whatever stack sits above it) should have no influence on it.
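
For illustration, a minimal sketch (not code from this series) of how a
driver would advertise such a per-device restriction through the generic
dma-mapping helpers; the probe function and the 64K value are only
examples, and it assumes the bus code has already allocated
dev->dma_parms (as PCI does):

	#include <linux/dma-mapping.h>

	static int example_probe(struct device *dev)
	{
		/*
		 * Segments mapped for this device must not cross a 64K
		 * boundary; dma_get_seg_boundary() will then report
		 * 0xffff to the DMA/IOMMU code above.  Returns -EIO if
		 * dev->dma_parms was not set up by the bus code.
		 */
		return dma_set_seg_boundary(dev, 0xffff);
	}
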
> 
> Indeed, I wasn't suggesting making more than one call, just that
> alloc_iova_fast() is quite likely to have to fall back to alloc_iova()
> here, so there may be some mileage in going directly to the latter, with
> the benefit of then being able to rely on find_iova() later (since you
> know for sure you allocated out of the tree rather than the caches). My
> hunch is that dma_map_sg() tends to be called for bulk data transfer
> (block devices, DRM, etc.) so is probably a less contended path compared
> to the network layer hammering dma_map_single().

Using different functions for allocation would also require special
handling in the queued-freeing code, as I would then have to track each
allocation to know whether to free it with the _fast variant or not.
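
As a hedged illustration of that concern (not code from this series; the
struct and function names are hypothetical), the deferred-free path would
have to remember which allocator produced each range:

	#include <linux/iova.h>

	/* hypothetical bookkeeping entry for the deferred-free queue */
	struct queued_free_entry {
		unsigned long iova_pfn;
		unsigned long pages;
		bool          from_rcache; /* allocated with alloc_iova_fast()? */
	};

	static void queued_free_release(struct iova_domain *iovad,
					struct queued_free_entry *e)
	{
		if (e->from_rcache)
			free_iova_fast(iovad, e->iova_pfn, e->pages);
		else
			free_iova(iovad, e->iova_pfn); /* does the find_iova() lookup itself */
	}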

> > +	mask          = dma_get_seg_boundary(dev);
> > +	boundary_size = mask + 1 ? ALIGN(mask + 1, PAGE_SIZE) >> PAGE_SHIFT :
> > +				   1UL << (BITS_PER_LONG - PAGE_SHIFT);
> 
> (mask >> PAGE_SHIFT) + 1 ?

That should make no difference unless some of the low PAGE_SHIFT bits of
mask are 0 (which shouldn't happen).
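
For a concrete check of the arithmetic (illustrative values only,
assuming PAGE_SHIFT == 12 and BITS_PER_LONG == 64):

	mask = 0xffff (64K boundary):
	  mask + 1                    = 0x10000  -> first branch
	  ALIGN(0x10000, 4096) >> 12  = 16 pages
	  (mask >> 12) + 1            = 16 pages   (same result)

	mask = ~0UL (no boundary restriction):
	  mask + 1                    = 0        -> fallback branch
	  1UL << (64 - 12)            = 2^52 pages
	  (mask >> 12) + 1            = 2^52 pages (same result)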



	Joerg
