Message-Id: <20230425110511.11680-4-zhangpeng.00@bytedance.com>
Date: Tue, 25 Apr 2023 19:05:05 +0800
From: Peng Zhang <zhangpeng.00@...edance.com>
To: Liam.Howlett@...cle.com
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, maple-tree@...ts.infradead.org,
Peng Zhang <zhangpeng.00@...edance.com>
Subject: [PATCH 3/9] maple_tree: Modify the allocation method of mtree_alloc_range/rrange()

Let mtree_alloc_range() and mtree_alloc_rrange() use mas_empty_area() and
mas_empty_area_rev() respectively to do the allocation, which reduces code
redundancy: we no longer have to maintain two logically identical pieces of
code, which improves maintainability.

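For context, a minimal caller-side sketch of the interface this patch
reworks internally (the values are purely illustrative; the tree must be an
allocation tree, i.e. initialised with MT_FLAGS_ALLOC_RANGE):

	struct maple_tree mt;
	unsigned long start;
	int ret;

	mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE);
	/* Find a free gap of size 0x1000 within [0, 0x2000] and store an entry. */
	ret = mtree_alloc_range(&mt, &start, xa_mk_value(5), 0x1000, 0, 0x2000,
				GFP_KERNEL);
	if (!ret)
		pr_debug("allocated range starts at %lx\n", start);
	mtree_destroy(&mt);
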
In fact, mtree_alloc_range/rrange() have some bugs. For example, when min
is equal to max (a case mas_empty_area()/mas_empty_area_rev() have already
been fixed to handle), the allocation fails.

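A hypothetical (untested) way to hit that case with the old code, reusing
the illustrative tree above: a size-1 allocation where only a single index
is acceptable, so min == max:

	/* Fails on the old mas_alloc() path even if index 0x1000 is vacant. */
	ret = mtree_alloc_range(&mt, &start, xa_mk_value(1), 1, 0x1000, 0x1000,
				GFP_KERNEL);
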
There are other bugs as well; I spotted them by inspection but did not test
them. For example, in the mtree_alloc_range() -> mas_alloc() -> mas_awalk()
path we set mas.index = min and mas.last = max - size, while mas_awalk()
expects mas.index = min and mas.last = max, which may lead to allocation
failures.

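In other words, the old path (removed below) sets up the walk like this:

	mas.index = min;
	mas.last = max - size;	/* but mas_awalk() expects mas.last == max */
	ret = mas_alloc(&mas, entry, size, startp);
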
Right now no users call these two functions, so the bugs cannot trigger,
but they might in the future.

Also use mas_store_gfp() instead of mas_fill_gap(), as I don't see any
difference between them.

After doing this, the three functions mas_fill_gap(), mas_alloc() and
mas_rev_alloc() are no longer needed.

Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Peng Zhang <zhangpeng.00@...edance.com>
---
lib/maple_tree.c | 45 ++++++++++++---------------------------------
1 file changed, 12 insertions(+), 33 deletions(-)
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index aa55c914818a0..294d4c8668323 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6362,32 +6362,20 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, max - size);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min > max)
-		return -EINVAL;
-
-	if (max < size)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
-retry:
-	mas.offset = 0;
-	mas.index = min;
-	mas.last = max - size;
-	ret = mas_alloc(&mas, entry, size, startp);
-	if (mas_nomem(&mas, gfp))
-		goto retry;
-
+	ret = mas_empty_area(&mas, min, max, size);
+	if (!ret)
+		ret = mas_store_gfp(&mas, entry, gfp);
 	mtree_unlock(mt);
+	if (!ret)
+		*startp = mas.index;
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_range);
@@ -6398,29 +6386,20 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, max - size);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min >= max)
-		return -EINVAL;
-
-	if (max < size - 1)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
-retry:
-	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
-	if (mas_nomem(&mas, gfp))
-		goto retry;
-
+	ret = mas_empty_area_rev(&mas, min, max, size);
+	if (!ret)
+		ret = mas_store_gfp(&mas, entry, gfp);
 	mtree_unlock(mt);
+	if (!ret)
+		*startp = mas.index;
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_rrange);
--
2.20.1