Message-ID: <CAJhGHyDp4LYcd8xzL761UYAhj6AY=R4Uik6b4tVN1sqyw95Dsg@mail.gmail.com>
Date: Sat, 23 May 2020 08:56:21 +0800
From: Lai Jiangshan <jiangshanlai+lkml@...il.com>
To: Lai Jiangshan <laijs@...ux.alibaba.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Oleg Nesterov <oleg@...hat.com>,
Michel Lespinasse <walken@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...hat.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH V2 1/2] rbtree_latch: quit searching when reaching to
maximum depth
On Sat, May 16, 2020 at 12:01 AM Lai Jiangshan <laijs@...ux.alibaba.com> wrote:
>
> lib/rbtree.c has ensured that it is not possible to
> inadvertently cause (temporary) loops in the tree structure
> as seen in the program order of the modifier. But a loop can
> still be observed by a searcher due to CPU write reordering.
>
> for example:
> modifier searcher
>
> left rotate at parent
> parent->rb_right is node
> search to parent
> parent->rb_right is node
> +->see node->rb_left changed
> WRITE_ONCE(parent->rb_right, tmp);-+ | node->rb_left is parent
> no smp_wmb(), some ARCHs can | |
> reorder these two writes | | loop long between
> WRITE_ONCE(node->rb_left, parent);-+-+ parent and node
> |
> +--->finally see
> parent->rb_right
>
> The long loop won't stop until the modifier's CPU flushes
> its writes. To avoid it, we should limit the search depth.
> There are no more than (1<<BITS_PER_LONG)-1 nodes in the tree,
> and the max depth of the tree is no more than 2*lg(node_count+1),
> which is no more than 2*BITS_PER_LONG.
>
> So the search should stop after descending at most
> 2*BITS_PER_LONG levels.
>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Paul E. McKenney <paulmck@...nel.org>
> Cc: Oleg Nesterov <oleg@...hat.com>
> Cc: Michel Lespinasse <walken@...gle.com>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> ---
Hello,

Could anyone review or ack the updated patch?

Compared to v1:
- Applied Mathieu's suggestion to update the changelog.
- Avoided the depth check on x86, so the patch makes no functional
  change on x86.

Also, which tree should these patches be routed through? They are
memory-ordering related.

Thanks,
Lai
> include/linux/rbtree_latch.h | 39 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/include/linux/rbtree_latch.h b/include/linux/rbtree_latch.h
> index 7d012faa509a..638942f53c0a 100644
> --- a/include/linux/rbtree_latch.h
> +++ b/include/linux/rbtree_latch.h
> @@ -102,11 +102,47 @@ __lt_erase(struct latch_tree_node *ltn, struct latch_tree_root *ltr, int idx)
> rb_erase(<n->node[idx], <r->tree[idx]);
> }
>
> +/*
> + * Beware when rbtree is being searched in RCU read sites.
> + *
> + * lib/rbtree.c has ensured that it is not possible to
> + * inadvertently cause (temporary) loops in the tree structure
> + * as seen in the program order of the modifier. But a loop can
> + * still be observed by a searcher due to CPU write reordering.
> + *
> + * for example:
> + * modifier searcher
> + *
> + * left rotate at parent search to parent
> + * parent->rb_right is node parent->rb_right is node
> + * +->see node->rb_left changed
> + * WRITE_ONCE(parent->rb_right, tmp);-+ | node->rb_left is parent
> + * no smp_wmb(), some ARCHs can | |
> + * reorder these two writes | | loop long between
> + * WRITE_ONCE(node->rb_left, parent);-+-+ parent and node
> + * |
> + * +--->finally see
> + * parent->rb_right
> + *
> + * The long loop won't stop until the searcher finally sees the changes
> + * from the modifier. To avoid it, we should limit the search depth.
> + *
> + * Limited by the address space of the kernel, there are no more than
> + * (1<<BITS_PER_LONG)-1 nodes in the tree. And the max_depth of a tree is
> + * no more than 2*lg(node_count+1), which is no more than 2*BITS_PER_LONG.
> + *
> + * So the search should stop after descending at most
> + * 2*BITS_PER_LONG levels.
> + *
> + * Note: the above problem cannot occur under a TSO memory model,
> + * such as x86's, which can therefore dispense with the depth check.
> + */
> static __always_inline struct latch_tree_node *
> __lt_find(void *key, struct latch_tree_root *ltr, int idx,
> int (*comp)(void *key, struct latch_tree_node *node))
> {
> struct rb_node *node = rcu_dereference_raw(ltr->tree[idx].rb_node);
> + int depth = 2 * BITS_PER_LONG;
> struct latch_tree_node *ltn;
> int c;
>
> @@ -120,6 +156,9 @@ __lt_find(void *key, struct latch_tree_root *ltr, int idx,
> node = rcu_dereference_raw(node->rb_right);
> else
> return ltn;
> +
> + if (!IS_ENABLED(CONFIG_X86) && (--depth < 0))
> + break;
> }
>
> return NULL;
> --
> 2.20.1
>