Message-ID: <07270e29-c9d4-ae8c-a236-eb6fefccbf6c@arm.com>
Date:   Fri, 18 Sep 2020 15:18:04 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     vjitta@...eaurora.org, joro@...tes.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Cc:     vinmenon@...eaurora.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 1/2] iommu/iova: Retry from last rb tree node if iova
 search fails

On 2020-08-20 13:49, vjitta@...eaurora.org wrote:
> From: Vijayanand Jitta <vjitta@...eaurora.org>
> 
> Whenever a new iova alloc request comes in, the iova is always
> searched from the cached node and the nodes previous to the cached
> node. So, even if there is free iova space available in the nodes
> next to the cached node, iova allocation can still fail because of
> this approach.
> 
> Consider the following sequence of iova allocs and frees on
> 1GB of iova space
> 
> 1) alloc - 500MB
> 2) alloc - 12MB
> 3) alloc - 499MB
> 4) free -  12MB which was allocated in step 2
> 5) alloc - 13MB
> 
> After the above sequence we will have 12MB of free iova space, and
> the cached node will be pointing to the iova pfn of the last 13MB
> alloc, which is the lowest iova pfn of that iova space. Now if we
> get an alloc request of 2MB, we just search from the cached node
> and then look for lower iova pfns for free iova, and as there
> aren't any, the iova alloc fails even though there is 12MB of free
> iova space.
> 
> To avoid such iova search failures, do a retry from the last rb tree
> node when the iova search fails; this will search the entire tree
> and get an iova if it is available.
> 
> Signed-off-by: Vijayanand Jitta <vjitta@...eaurora.org>
> ---
>   drivers/iommu/iova.c | 23 +++++++++++++++++------
>   1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 49fc01f..4e77116 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   	struct rb_node *curr, *prev;
>   	struct iova *curr_iova;
>   	unsigned long flags;
> -	unsigned long new_pfn;
> +	unsigned long new_pfn, low_pfn_new;
>   	unsigned long align_mask = ~0UL;
> +	unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
>   
>   	if (size_aligned)
>   		align_mask <<= fls_long(size - 1);
> @@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   
>   	curr = __get_cached_rbnode(iovad, limit_pfn);
>   	curr_iova = rb_entry(curr, struct iova, node);
> +	low_pfn_new = curr_iova->pfn_hi + 1;

Could we call "low_pfn_new" something like "retry_pfn" instead? This 
code already has unavoidable readability struggles with so many 
different "pfn"s in play, so having two different meanings of "new" 
really doesn't help.

Other than that, I think this looks OK (IIRC it's basically what I 
originally suggested), so with the naming tweaked,

Reviewed-by: Robin Murphy <robin.murphy@....com>

> +
> +retry:
>   	do {
> -		limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> -		new_pfn = (limit_pfn - size) & align_mask;
> +		high_pfn = min(high_pfn, curr_iova->pfn_lo);
> +		new_pfn = (high_pfn - size) & align_mask;
>   		prev = curr;
>   		curr = rb_prev(curr);
>   		curr_iova = rb_entry(curr, struct iova, node);
> -	} while (curr && new_pfn <= curr_iova->pfn_hi);
> -
> -	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> +	} while (curr && new_pfn <= curr_iova->pfn_hi && new_pfn >= low_pfn);
> +
> +	if (high_pfn < size || new_pfn < low_pfn) {
> +		if (low_pfn == iovad->start_pfn && low_pfn_new < limit_pfn) {
> +			high_pfn = limit_pfn;
> +			low_pfn = low_pfn_new;
> +			curr = &iovad->anchor.node;
> +			curr_iova = rb_entry(curr, struct iova, node);
> +			goto retry;
> +		}
>   		iovad->max32_alloc_size = size;
>   		goto iova32_full;
>   	}
> 
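For reference, here is a minimal userspace sketch of the sequence from
the commit message (plain C, not the iova code itself; struct range,
find_hole() and the MB-based numbers are made up for illustration). It
shows why the first top-down pass from the cached node fails for a 2MB
request and why a retry over the full range above the cached node
succeeds:

#include <stdbool.h>
#include <stdio.h>

/* One busy range [lo, hi) in MB, kept sorted from high to low. */
struct range { unsigned long lo, hi; };

/*
 * Top-down first-fit over the busy list: start walking at index
 * 'start' (the cached entry) towards lower addresses and place
 * 'size' MB somewhere in [floor, ceiling). Returns true on success.
 */
static bool find_hole(const struct range *busy, int n, int start,
		      unsigned long floor, unsigned long ceiling,
		      unsigned long size, unsigned long *base)
{
	unsigned long high = ceiling, cand;
	int i = start;

	do {
		if (busy[i].lo < high)
			high = busy[i].lo;	/* stay below this busy range */
		if (high < floor + size)
			return false;		/* no room left in this pass */
		cand = high - size;
		i++;				/* next lower range, like rb_prev() */
	} while (i < n && cand < busy[i].hi);	/* overlap? keep walking down */

	*base = cand;
	return true;
}

int main(void)
{
	/*
	 * State after: alloc 500MB, alloc 12MB, alloc 499MB, free the
	 * 12MB, alloc 13MB, on a 1GB (1024MB) space. The only free
	 * space left is the 12MB hole at [512, 524).
	 */
	const struct range busy[] = {
		{ 524, 1024 },	/* the 500MB allocation */
		{  13,  512 },	/* the 499MB allocation */
		{   0,   13 },	/* the last 13MB allocation (cached node) */
	};
	const int n = 3, cached = 2;
	unsigned long base;

	/* Old behaviour: only look at or below the cached node. */
	if (!find_hole(busy, n, cached, 0, 1024, 2, &base))
		printf("first pass: no room for 2MB below the cached node\n");

	/* Retry: rescan from the top, but only above the cached node's end. */
	if (find_hole(busy, n, 0, busy[cached].hi, 1024, 2, &base))
		printf("retry: 2MB fits at [%lu, %lu)MB\n", base, base + 2);

	return 0;
}

The first call fails exactly as described above; the second finds the
12MB hole and places the 2MB request at [522, 524)MB, which mirrors
what the new retry path does with the retry pfn and the anchor node.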
