Message-ID: <20230627144513.fp5osnsqhn3rgvs3@revolver>
Date: Tue, 27 Jun 2023 10:45:13 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Peng Zhang <zhangpeng.00@...edance.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, maple-tree@...ts.infradead.org
Subject: Re: [PATCH v3 4/4] maple_tree: add a fast path case in
mas_wr_slot_store()
* Peng Zhang <zhangpeng.00@...edance.com> [230615 04:43]:
> When a range expands in both directions and only partially overwrites the
> previous and next ranges, the number of entries does not increase, so we
> can just update the pivots as a fast path. However, this may introduce
> potential risks in RCU mode (even though it may pass the test), because it
> updates two pivots. We only enable it in non-RCU mode for now.
You've fixed the test so that it no longer passes without the RCU bit
set, so this comment should be removed.
>
> Signed-off-by: Peng Zhang <zhangpeng.00@...edance.com>
> ---
> lib/maple_tree.c | 36 ++++++++++++++++++++++++------------
> 1 file changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index da4af6743b30..bff6531fd0bc 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4100,23 +4100,35 @@ static inline bool mas_wr_slot_store(struct ma_wr_state *wr_mas)
> {
> struct ma_state *mas = wr_mas->mas;
> unsigned char offset = mas->offset;
> + void __rcu **slots = wr_mas->slots;
> bool gap = false;
>
> - if (wr_mas->offset_end - offset != 1)
> - return false;
> -
> - gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset);
> - gap |= !mt_slot_locked(mas->tree, wr_mas->slots, offset + 1);
> + gap |= !mt_slot_locked(mas->tree, slots, offset);
> + gap |= !mt_slot_locked(mas->tree, slots, offset + 1);
>
> - if (mas->index == wr_mas->r_min) {
> - /* Overwriting the range and over a part of the next range. */
> - rcu_assign_pointer(wr_mas->slots[offset], wr_mas->entry);
> - wr_mas->pivots[offset] = mas->last;
> - } else {
> - /* Overwriting a part of the range and over the next range */
> - rcu_assign_pointer(wr_mas->slots[offset + 1], wr_mas->entry);
> + if (wr_mas->offset_end - offset == 1) {
> + if (mas->index == wr_mas->r_min) {
> + /* Overwriting the range and a part of the next one */
> + rcu_assign_pointer(slots[offset], wr_mas->entry);
> + wr_mas->pivots[offset] = mas->last;
> + } else {
> + /* Overwriting a part of the range and the next one */
> + rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
> + wr_mas->pivots[offset] = mas->index - 1;
> + mas->offset++; /* Keep mas accurate. */
> + }
> + } else if (!mt_in_rcu(mas->tree)) {
> + /*
> + * Expand the range, only partially overwriting the previous and
> + * next ranges
> + */
> + gap |= !mt_slot_locked(mas->tree, slots, offset + 2);
> + rcu_assign_pointer(slots[offset + 1], wr_mas->entry);
> wr_mas->pivots[offset] = mas->index - 1;
> + wr_mas->pivots[offset + 1] = mas->last;
> mas->offset++; /* Keep mas accurate. */
> + } else {
> + return false;
> }
>
> trace_ma_write(__func__, mas, 0, wr_mas->entry);
> --
> 2.20.1
>