Date:   Wed, 19 Feb 2020 15:22:36 -0800 (PST)
From:   Liam Mark <lmark@...eaurora.org>
To:     Will Deacon <will@...nel.org>
cc:     Robin Murphy <robin.murphy@....com>,
        Joerg Roedel <joro@...tes.org>,
        "Isaac J. Manjarres" <isaacm@...eaurora.org>,
        Pratik Patel <pratikp@...eaurora.org>,
        iommu@...ts.linux-foundation.org, kernel-team@...roid.com,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] iommu/iova: Support limiting IOVA alignment

On Wed, 19 Feb 2020, Will Deacon wrote:

> On Mon, Feb 17, 2020 at 04:46:14PM +0000, Robin Murphy wrote:
> > On 14/02/2020 8:30 pm, Liam Mark wrote:
> > > 
> > > When the IOVA framework applies IOVA alignment it aligns all
> > > IOVAs to the smallest PAGE_SIZE order which is greater than or
> > > equal to the requested IOVA size.
> > > 
> > > We support use cases that requires large buffers (> 64 MB in
> > > size) to be allocated and mapped in their stage 1 page tables.
> > > However, with this alignment scheme we find ourselves running
> > > out of IOVA space for 32 bit devices, so we are proposing this
> > > config, along the similar vein as CONFIG_CMA_ALIGNMENT for CMA
> > > allocations.
> > 
> > As per [1], I'd really like to better understand the allocation patterns
> > that lead to such a sparsely-occupied address space to begin with, given
> > that the rbtree allocator is supposed to try to maintain locality as far as
> > possible, and the rcaches should further improve on that. Are you also
> > frequently cycling intermediate-sized buffers which are smaller than 64MB
> > but still too big to be cached?  Are there a lot of non-power-of-two
> > allocations?
> 
> Right, information on the allocation pattern would help with this change
> and also the choice of IOVA allocation algorithm. Without it, we're just
> shooting in the dark.
> 

Thanks for the responses.

I am looking into how much of our allocation pattern details I can share.

My general understanding is that this issue occurs on 32-bit devices 
which have additional restrictions on the IOVA range they can use within 
those 32 bits.

An example is a use case which involves allocating a lot of buffers ~80MB 
in size; the current algorithm will require an alignment of 128MB for 
those buffers. My understanding is that it simply can't accommodate the 
number of 80MB buffers that are required because of the amount of IOVA 
space which can't be used due to the 128MB alignment requirement.

> > > Add CONFIG_IOMMU_LIMIT_IOVA_ALIGNMENT to limit the alignment of
> > > IOVAs to some desired PAGE_SIZE order, specified by
> > > CONFIG_IOMMU_IOVA_ALIGNMENT. This helps reduce the impact of
> > > fragmentation caused by the current IOVA alignment scheme, and
> > > gives better IOVA space utilization.
> > 
> > Even if the general change did prove reasonable, this IOVA allocator is not
> > owned by the DMA API, so entirely removing the option of strict
> > size-alignment feels a bit uncomfortable. Personally I'd replace the bool
> > argument with an actual alignment value to at least hand the authority out
> > to individual callers.
> > 
> > Furthermore, even in DMA API terms, is anyone really ever going to bother
> > tuning that config? Since iommu-dma is supposed to be a transparent layer,
> > arguably it shouldn't behave unnecessarily differently from CMA, so simply
> > piggy-backing off CONFIG_CMA_ALIGNMENT would seem logical.
> 
> Agreed, reusing CONFIG_CMA_ALIGNMENT makes a lot of sense here as callers
> relying on natural alignment of DMA buffer allocations already have to
> deal with that limitation. We could fix it as an optional parameter at
> init time (init_iova_domain()), and have the DMA IOMMU implementation
> pass it in there.
> 

My concern with using the CONFIG_CMA_ALIGNMENT alignment is that for us this 
would either involve further fragmenting our CMA regions (moving our CMA 
max alignment from 1MB to 2MB) or losing some of our 2MB IOVA block 
mappings (changing our IOVA max alignment from 2MB to 1MB).

At least for us CMA allocations are often not DMA mapped into stage 1 page 
tables so moving the CMA max alignment to 2MB in our case would, I think, 
only provide the disadvantage of having to increase the size of our CMA 
regions to accommodate this large alignment (which isn’t optimal for 
memory utilization since CMA regions can't satisfy unmovable page 
allocations).

As an alternative, would it be possible for the dma-iommu layer to use the 
size of the allocation and the domain's pgsize_bitmap field to pick a max 
IOVA alignment, which it can pass in for that IOVA allocation? That would 
maximize block mappings without wasting IOVA space.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
