Message-ID: <e9ae618c-58d4-d245-be80-e62fbde4f907@arm.com>
Date:   Mon, 17 Feb 2020 16:46:14 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Liam Mark <lmark@...eaurora.org>, Joerg Roedel <joro@...tes.org>
Cc:     "Isaac J. Manjarres" <isaacm@...eaurora.org>,
        Pratik Patel <pratikp@...eaurora.org>,
        iommu@...ts.linux-foundation.org, kernel-team@...roid.com,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] iommu/iova: Support limiting IOVA alignment

On 14/02/2020 8:30 pm, Liam Mark wrote:
> 
> When the IOVA framework applies IOVA alignment it aligns all
> IOVAs to the smallest PAGE_SIZE order which is greater than or
> equal to the requested IOVA size.
> 
> We support use cases that require large buffers (> 64 MB in
> size) to be allocated and mapped in their stage 1 page tables.
> However, with this alignment scheme we find ourselves running
> out of IOVA space for 32 bit devices, so we are proposing this
> config, in a similar vein to CONFIG_CMA_ALIGNMENT for CMA
> allocations.

As per [1], I'd really like to better understand the allocation patterns 
that lead to such a sparsely-occupied address space to begin with, given 
that the rbtree allocator is supposed to try to maintain locality as far 
as possible, and the rcaches should further improve on that. Are you 
also frequently cycling intermediate-sized buffers which are smaller 
than 64MB but still too big to be cached?  Are there a lot of 
non-power-of-two allocations?
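
For concreteness, the rounding at issue: the allocator start-aligns
each IOVA to the next power of two above its size, so a 65 MiB
request occupies a 128 MiB-aligned slot, and a 4 GiB space holds at
most 32 of them. A throwaway userspace sketch of that arithmetic
(fls_long() reimplemented locally, so this is illustration rather
than kernel code, and it assumes a 64-bit long):

#include <stdio.h>

/* local stand-in for the kernel's fls_long() */
static unsigned int fls_long(unsigned long x)
{
	unsigned int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	unsigned long size = 65UL << 20;	/* 65 MiB request */
	unsigned long align = 1UL << fls_long(size - 1);

	printf("align = %lu MiB\n", align >> 20);	/* prints 128 */
	printf("4 GiB / align = %lu slots\n", (4UL << 30) / align);
	return 0;
}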

> Add CONFIG_IOMMU_LIMIT_IOVA_ALIGNMENT to limit the alignment of
> IOVAs to some desired PAGE_SIZE order, specified by
> CONFIG_IOMMU_IOVA_ALIGNMENT. This helps reduce the impact of
> fragmentation caused by the current IOVA alignment scheme, and
> gives better IOVA space utilization.

Even if the general change did prove reasonable, this IOVA allocator is 
not owned by the DMA API, so entirely removing the option of strict 
size-alignment feels a bit uncomfortable. Personally I'd replace the 
bool argument with an actual alignment value to at least hand the 
authority out to individual callers.
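
Roughly this sort of shape, say (completely untested, and
"align_shift" is just a placeholder name of mine, not anything from
the posted patch):

static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
		unsigned long size, unsigned long limit_pfn,
		struct iova *new, unsigned long align_shift)
{
	...
	/* the caller states its alignment policy directly */
	unsigned long align_mask = ~0UL << align_shift;
	...
}

with existing callers passing fls_long(size - 1) to keep today's
strict size-alignment, and anyone wanting a weaker guarantee passing
something smaller.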

Furthermore, even in DMA API terms, is anyone really ever going to 
bother tuning that config? Since iommu-dma is supposed to be a 
transparent layer, arguably it shouldn't behave unnecessarily 
differently from CMA, so simply piggy-backing off CONFIG_CMA_ALIGNMENT 
would seem logical.
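
That could be as simple as keeping the patch's helper but pointing it
at the existing knob, roughly (untested, and note CONFIG_CMA_ALIGNMENT
is only present when DMA_CMA is enabled, so some fallback would be
needed):

static unsigned long limit_align_shift(struct iova_domain *iovad,
				       unsigned long shift)
{
	unsigned long max_align_shift;

	/* reuse the CMA limit rather than adding a new option */
	max_align_shift = CONFIG_CMA_ALIGNMENT + PAGE_SHIFT
			- iova_shift(iovad);
	return min_t(unsigned long, max_align_shift, shift);
}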

Robin.

[1] 
https://lore.kernel.org/linux-iommu/1581721602-17010-1-git-send-email-isaacm@codeaurora.org/

> Signed-off-by: Liam Mark <lmark@...eaurora.org>
> ---
>   drivers/iommu/Kconfig | 31 +++++++++++++++++++++++++++++++
>   drivers/iommu/iova.c  | 20 +++++++++++++++++++-
>   2 files changed, 50 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index d2fade984999..9684a153cc72 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -3,6 +3,37 @@
>   config IOMMU_IOVA
>   	tristate
>   
> +if IOMMU_IOVA
> +
> +config IOMMU_LIMIT_IOVA_ALIGNMENT
> +	bool "Limit IOVA alignment"
> +	help
> +	  When the IOVA framework applies IOVA alignment it aligns all
> +	  IOVAs to the smallest PAGE_SIZE order which is greater than or
> +	  equal to the requested IOVA size. This works fine for sizes up
> +	  to several MiB, but for larger sizes it results in address
> +	  space wastage and fragmentation. For example drivers with a 4
> +	  GiB IOVA space might run out of IOVA space when allocating
> +	  buffers greater than 64 MiB.
> +
> +	  Enable this option to impose a limit on the alignment of IOVAs.
> +
> +	  If unsure, say N.
> +
> +config IOMMU_IOVA_ALIGNMENT
> +	int "Maximum PAGE_SIZE order of alignment for IOVAs"
> +	depends on IOMMU_LIMIT_IOVA_ALIGNMENT
> +	range 4 9
> +	default 9
> +	help
> +	  With this parameter you can specify the maximum PAGE_SIZE order for
> +	  IOVAs. Larger IOVAs will be aligned only to this specified order.
> +	  The alignment is two to this order multiplied by the PAGE_SIZE.
> +
> +	  If unsure, leave the default value "9".
> +
> +endif
> +
>   # The IOASID library may also be used by non-IOMMU_API users
>   config IOASID
>   	tristate
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 0e6a9536eca6..259884c8dbd1 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -177,6 +177,24 @@ int init_iova_flush_queue(struct iova_domain *iovad,
>   	rb_insert_color(&iova->node, root);
>   }
>   
> +#ifdef CONFIG_IOMMU_LIMIT_IOVA_ALIGNMENT
> +static unsigned long limit_align_shift(struct iova_domain *iovad,
> +				       unsigned long shift)
> +{
> +	unsigned long max_align_shift;
> +
> +	max_align_shift = CONFIG_IOMMU_IOVA_ALIGNMENT + PAGE_SHIFT
> +			- iova_shift(iovad);
> +	return min_t(unsigned long, max_align_shift, shift);
> +}
> +#else
> +static unsigned long limit_align_shift(struct iova_domain *iovad,
> +				       unsigned long shift)
> +{
> +	return shift;
> +}
> +#endif
> +
>   static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   		unsigned long size, unsigned long limit_pfn,
>   			struct iova *new, bool size_aligned)
> @@ -188,7 +206,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   	unsigned long align_mask = ~0UL;
>   
>   	if (size_aligned)
> -		align_mask <<= fls_long(size - 1);
> +		align_mask <<= limit_align_shift(iovad, fls_long(size - 1));
>   
>   	/* Walk the tree backwards */
>   	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
> 
