Message-ID: <20241115075203.ojspk255cw3sr3s3@master>
Date: Fri, 15 Nov 2024 07:52:03 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Sidhartha Kumar <sidhartha.kumar@...cle.com>
Cc: linux-kernel@...r.kernel.org, maple-tree@...ts.infradead.org,
linux-mm@...ck.org, akpm@...ux-foundation.org,
liam.howlett@...cle.com
Subject: Re: [PATCH 3/5] maple_tree: use vacant nodes to reduce worst case
allocations
On Thu, Nov 14, 2024 at 12:05:22PM -0500, Sidhartha Kumar wrote:
>In order to determine the store type for a maple tree operation, a walk
>of the tree is done through mas_wr_walk(). This function descends the
>tree until a spanning write is detected or we reach a leaf node. While
>descending, keep track of the height at which we encounter a node with
>available space. This is done by checking if mas->end is less than the
>number of slots a given node type can fit.
>
>Now that the height of the vacant node is tracked, we can use the
>difference between the height of the tree and the height of the vacant
>node to know how many levels we will have to propagate creating new
>nodes. Update mas_prealloc_calc() to consider the vacant height and
>reduce the number of worst allocations.
>
>Rebalancing stores are not supported and fall back to using the full
>height of the tree for allocations.
>
>Update preallocation testing assertions to take into account vacant
>height.
>
>Signed-off-by: Sidhartha <sidhartha.kumar@...cle.com>
>---
> include/linux/maple_tree.h | 2 +
> lib/maple_tree.c | 13 +++--
> tools/testing/radix-tree/maple.c | 97 +++++++++++++++++++++++++++++---
> 3 files changed, 100 insertions(+), 12 deletions(-)
>
>diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
>index cbbcd18d4186..7d777aa2d9ed 100644
>--- a/include/linux/maple_tree.h
>+++ b/include/linux/maple_tree.h
>@@ -463,6 +463,7 @@ struct ma_wr_state {
> void __rcu **slots; /* mas->node->slots pointer */
> void *entry; /* The entry to write */
> void *content; /* The existing entry that is being overwritten */
>+ unsigned char vacant_height; /* Depth of lowest node with free space */
The field is named vacant_height, but the comment describes it as a "Depth".
Would mixing height and depth like this be a little misleading?
> };
>
> #define mas_lock(mas) spin_lock(&((mas)->tree->ma_lock))
>@@ -498,6 +499,7 @@ struct ma_wr_state {
> .mas = ma_state, \
> .content = NULL, \
> .entry = wr_entry, \
>+ .vacant_height = 0 \
> }
>
> #define MA_TOPIARY(name, tree) \
>diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>index 21289e350382..f14d70c171c2 100644
>--- a/lib/maple_tree.c
>+++ b/lib/maple_tree.c
>@@ -3545,6 +3545,9 @@ static bool mas_wr_walk(struct ma_wr_state *wr_mas)
> if (ma_is_leaf(wr_mas->type))
> return true;
>
>+ if (mas->end < mt_slots[wr_mas->type] - 1)
>+ wr_mas->vacant_height = mas->depth + 1;
For some cases in rebalance, we may split the data into three parts, which
means we need 2 extra vacant slots.
Maybe this check is not accurate?
>+
> mas_wr_walk_traverse(wr_mas);
> }
>
>@@ -4159,7 +4162,9 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
> static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
> {
> struct ma_state *mas = wr_mas->mas;
>- int ret = mas_mt_height(mas) * 3 + 1;
>+ unsigned char height = mas_mt_height(mas);
>+ int ret = height * 3 + 1;
>+ unsigned char delta = height - wr_mas->vacant_height;
>
> switch (mas->store_type) {
> case wr_invalid:
>@@ -4177,13 +4182,13 @@ static inline int mas_prealloc_calc(struct ma_wr_state *wr_mas, void *entry)
> ret = 0;
> break;
> case wr_spanning_store:
>- ret = mas_mt_height(mas) * 3 + 1;
>+ ret = delta * 3 + 1;
Hmm... I am afraid we need to put this patch after the next one.
Without the change in the next patch, we still need to go up the tree to the
root to rebalance.
> break;
> case wr_split_store:
>- ret = mas_mt_height(mas) * 2 + 1;
>+ ret = delta * 2 + 1;
> break;
> case wr_rebalance:
>- ret = mas_mt_height(mas) * 2 - 1;
>+ ret = height * 2 + 1;
Looks like the current calculation is not correct?
If so, do we need a fix that can be backported?
--
Wei Yang
Help you, Help me