Message-ID: <00e03593-cc31-4af5-470f-da717781fa9b@codeaurora.org>
Date: Tue, 26 May 2020 10:48:58 +0530
From: Vijayanand Jitta <vjitta@...eaurora.org>
To: joro@...tes.org, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Cc: robin.murphy@....com, ajaynumb@...il.com, vinmenon@...eaurora.org,
kernel-team@...roid.com
Subject: Re: [PATCH v2] iommu/iova: Retry from last rb tree node if iova
search fails
On 5/11/2020 4:34 PM, vjitta@...eaurora.org wrote:
> From: Vijayanand Jitta <vjitta@...eaurora.org>
>
> Whenever a new iova alloc request comes in, the iova is always
> searched from the cached node and the nodes which are previous to
> the cached node. So, even if there is free iova space available in
> the nodes which are next to the cached node, iova allocation can
> still fail because of this approach.
>
> Consider the following sequence of iova allocs and frees on a
> 1GB iova space:
>
> 1) alloc - 500MB
> 2) alloc - 12MB
> 3) alloc - 499MB
> 4) free - 12MB which was allocated in step 2
> 5) alloc - 13MB
>
> After the above sequence we will have 12MB of free iova space, and
> the cached node will be pointing to the iova pfn of the last alloc
> of 13MB, which will be the lowest iova pfn of that iova space. Now
> if we get an alloc request of 2MB, we just search from the cached
> node and then look for lower iova pfns for free iova, and as there
> aren't any, iova alloc fails even though there is 12MB of free iova
> space.
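
For illustration, the failure mode described above could be reproduced
with a sequence like the following. This is a minimal, hypothetical
kernel-module sketch against the iova API; names such as test_domain,
MB_TO_PFNS and iova_retry_demo are illustrative, not from the patch:

#include <linux/iova.h>
#include <linux/sizes.h>

#define MB_TO_PFNS(m)	(((m) * SZ_1M) >> PAGE_SHIFT)

static struct iova_domain test_domain;

static void iova_retry_demo(void)
{
	unsigned long start = 1;
	unsigned long limit = start + MB_TO_PFNS(1024) - 1;
	struct iova *two_mb, *twelve_mb;

	if (iova_cache_get())
		return;
	init_iova_domain(&test_domain, PAGE_SIZE, start);

	alloc_iova(&test_domain, MB_TO_PFNS(500), limit, false);	/* 1) */
	twelve_mb = alloc_iova(&test_domain, MB_TO_PFNS(12), limit, false); /* 2) */
	alloc_iova(&test_domain, MB_TO_PFNS(499), limit, false);	/* 3) */
	__free_iova(&test_domain, twelve_mb);				/* 4) */
	alloc_iova(&test_domain, MB_TO_PFNS(13), limit, false);		/* 5) */

	/*
	 * Without the retry this allocation fails: the cached node now
	 * points at the 13MB allocation at the bottom of the space, the
	 * search only walks downwards from there, and the 12MB hole
	 * freed in step 4 lies above the cached node where the walk
	 * never looks.
	 */
	two_mb = alloc_iova(&test_domain, MB_TO_PFNS(2), limit, false);
	WARN_ON(!two_mb);
}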
>
> To avoid such iova search failures, retry from the last rb tree node
> when the iova search fails; this will search the entire tree and get
> an iova if one is available.
>
> Signed-off-by: Vijayanand Jitta <vjitta@...eaurora.org>
> ---
> drivers/iommu/iova.c | 19 +++++++++++++++----
> 1 file changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 0e6a953..7d82afc 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
> struct rb_node *curr, *prev;
> struct iova *curr_iova;
> unsigned long flags;
> - unsigned long new_pfn;
> + unsigned long new_pfn, alloc_lo_new;
> unsigned long align_mask = ~0UL;
> + unsigned long alloc_hi = limit_pfn, alloc_lo = iovad->start_pfn;
>
> if (size_aligned)
> align_mask <<= fls_long(size - 1);
> @@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>
> curr = __get_cached_rbnode(iovad, limit_pfn);
> curr_iova = rb_entry(curr, struct iova, node);
> + alloc_lo_new = curr_iova->pfn_hi;
> +
> +retry:
> do {
> - limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> - new_pfn = (limit_pfn - size) & align_mask;
> + alloc_hi = min(alloc_hi, curr_iova->pfn_lo);
> + new_pfn = (alloc_hi - size) & align_mask;
> prev = curr;
> curr = rb_prev(curr);
> curr_iova = rb_entry(curr, struct iova, node);
> } while (curr && new_pfn <= curr_iova->pfn_hi);
>
> - if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> + if (alloc_hi < size || new_pfn < alloc_lo) {
> + if (alloc_lo == iovad->start_pfn && alloc_lo_new < limit_pfn) {
> + alloc_hi = limit_pfn;
> + alloc_lo = alloc_lo_new;
> + curr = &iovad->anchor.node;
> + curr_iova = rb_entry(curr, struct iova, node);
> + goto retry;
> + }
> iovad->max32_alloc_size = size;
> goto iova32_full;
> }
>
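Stitching the two hunks together, the search path after this patch
would look roughly as follows. The surrounding context is
reconstructed from the pre-patch function, with locking and the
rbtree insertion on the success path elided, so treat this as an
approximation rather than the exact resulting code:

static int __alloc_and_insert_iova_range_sketch(struct iova_domain *iovad,
		unsigned long size, unsigned long limit_pfn,
		struct iova *new, bool size_aligned)
{
	struct rb_node *curr, *prev;
	struct iova *curr_iova;
	unsigned long new_pfn, alloc_lo_new;
	unsigned long align_mask = ~0UL;
	unsigned long alloc_hi = limit_pfn, alloc_lo = iovad->start_pfn;

	if (size_aligned)
		align_mask <<= fls_long(size - 1);

	/* First pass starts from the cached node, as before. */
	curr = __get_cached_rbnode(iovad, limit_pfn);
	curr_iova = rb_entry(curr, struct iova, node);
	/* Remember the top of the range the first pass will skip. */
	alloc_lo_new = curr_iova->pfn_hi;

retry:
	do {
		alloc_hi = min(alloc_hi, curr_iova->pfn_lo);
		new_pfn = (alloc_hi - size) & align_mask;
		prev = curr;
		curr = rb_prev(curr);
		curr_iova = rb_entry(curr, struct iova, node);
	} while (curr && new_pfn <= curr_iova->pfn_hi);

	if (alloc_hi < size || new_pfn < alloc_lo) {
		if (alloc_lo == iovad->start_pfn && alloc_lo_new < limit_pfn) {
			/*
			 * Second pass: restart from the anchor (the last
			 * rb tree node) and reject candidates below
			 * alloc_lo_new, the range the first pass has
			 * already covered.
			 */
			alloc_hi = limit_pfn;
			alloc_lo = alloc_lo_new;
			curr = &iovad->anchor.node;
			curr_iova = rb_entry(curr, struct iova, node);
			goto retry;
		}
		iovad->max32_alloc_size = size;
		return -ENOMEM;
	}

	new->pfn_lo = new_pfn;
	new->pfn_hi = new->pfn_lo + size - 1;
	/* ... insert 'new' into the rb tree and update the cached node ... */
	return 0;
}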
ping?
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of Code Aurora Forum, hosted by The Linux Foundation