Message-ID: <20200219123704.GC19400@willie-the-truck>
Date:   Wed, 19 Feb 2020 12:37:04 +0000
From:   Will Deacon <will@...nel.org>
To:     Robin Murphy <robin.murphy@....com>
Cc:     Liam Mark <lmark@...eaurora.org>, Joerg Roedel <joro@...tes.org>,
        "Isaac J. Manjarres" <isaacm@...eaurora.org>,
        Pratik Patel <pratikp@...eaurora.org>,
        iommu@...ts.linux-foundation.org, kernel-team@...roid.com,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] iommu/iova: Support limiting IOVA alignment

On Mon, Feb 17, 2020 at 04:46:14PM +0000, Robin Murphy wrote:
> On 14/02/2020 8:30 pm, Liam Mark wrote:
> > 
> > When the IOVA framework applies IOVA alignment it aligns all
> > IOVAs to the smallest PAGE_SIZE order which is greater than or
> > equal to the requested IOVA size.
> > 
> > We support use cases that require large buffers (> 64 MB in
> > size) to be allocated and mapped in their stage 1 page tables.
> > However, with this alignment scheme we find ourselves running
> > out of IOVA space for 32-bit devices, so we are proposing this
> > config, in a similar vein to CONFIG_CMA_ALIGNMENT for CMA
> > allocations.
> 
> As per [1], I'd really like to better understand the allocation patterns
> that lead to such a sparsely-occupied address space to begin with, given
> that the rbtree allocator is supposed to try to maintain locality as far as
> possible, and the rcaches should further improve on that. Are you also
> frequently cycling intermediate-sized buffers which are smaller than 64MB
> but still too big to be cached?  Are there a lot of non-power-of-two
> allocations?

Right, information on the allocation pattern would help with this change
and also the choice of IOVA allocation algorithm. Without it, we're just
shooting in the dark.

> > Add CONFIG_IOMMU_LIMIT_IOVA_ALIGNMENT to limit the alignment of
> > IOVAs to some desired PAGE_SIZE order, specified by
> > CONFIG_IOMMU_IOVA_ALIGNMENT. This helps reduce the impact of
> > fragmentation caused by the current IOVA alignment scheme, and
> > gives better IOVA space utilization.
> 
> Even if the general change did prove reasonable, this IOVA allocator is not
> owned by the DMA API, so entirely removing the option of strict
> size-alignment feels a bit uncomfortable. Personally I'd replace the bool
> argument with an actual alignment value to at least hand the authority out
> to individual callers.
> 
> Furthermore, even in DMA API terms, is anyone really ever going to bother
> tuning that config? Since iommu-dma is supposed to be a transparent layer,
> arguably it shouldn't behave unnecessarily differently from CMA, so simply
> piggy-backing off CONFIG_CMA_ALIGNMENT would seem logical.

Agreed, reusing CONFIG_CMA_ALIGNMENT makes a lot of sense here as callers
relying on natural alignment of DMA buffer allocations already have to
deal with that limitation. We could fix it as an optional parameter at
init time (init_iova_domain()), and have the DMA IOMMU implementation
pass it in there.

Will
