lists.openwall.net: Open Source and information security mailing list archives
Message-ID: <1344262669.27828.55.camel@twins>
Date:	Mon, 06 Aug 2012 16:17:49 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Michel Lespinasse <walken@...gle.com>
Cc:	riel@...hat.com, daniel.santos@...ox.com, aarcange@...hat.com,
	dwmw2@...radead.org, akpm@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org
Subject: Re: [PATCH v2 8/9] rbtree: faster augmented rbtree manipulation

On Thu, 2012-08-02 at 15:34 -0700, Michel Lespinasse wrote:
> +static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
> +{
> +       while (rb != stop) {
> +               struct interval_tree_node *node =
> +                       rb_entry(rb, struct interval_tree_node, rb);
> +               unsigned long subtree_last = compute_subtree_last(node);
> +               if (node->__subtree_last == subtree_last)
> +                       break;
> +               node->__subtree_last = subtree_last;
> +               rb = rb_parent(&node->rb);
> +       }
> +}
> +
> +static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
> +{
> +       struct interval_tree_node *old =
> +               rb_entry(rb_old, struct interval_tree_node, rb);
> +       struct interval_tree_node *new =
> +               rb_entry(rb_new, struct interval_tree_node, rb);
> +
> +       new->__subtree_last = old->__subtree_last;
> +}
> +
> +static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
> +{
> +       struct interval_tree_node *old =
> +               rb_entry(rb_old, struct interval_tree_node, rb);
> +       struct interval_tree_node *new =
> +               rb_entry(rb_new, struct interval_tree_node, rb);
> +
> +       new->__subtree_last = old->__subtree_last;
> +       old->__subtree_last = compute_subtree_last(old);
> +} 

I still don't get why we need the three callbacks when both propagate and
rotate are simple variants of the original callback
(compute_subtree_last, in this instance).

Why should every user have to replicate the propagate and rotate
boilerplate?
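For concreteness, here is a sketch of what a shared helper could look like:
a single macro that expands one user-supplied compute function into the
three callbacks. The macro name, the simplified rb_node layout, and the
direct rb_parent pointer below are hypothetical stand-ins, not the kernel
API; the real code would sit on top of <linux/rbtree.h>.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel rbtree types (assumption: the real
 * implementation uses struct rb_node / rb_entry from <linux/rbtree.h>). */
struct rb_node {
	struct rb_node *rb_parent, *rb_left, *rb_right;
};

#define rb_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct interval_tree_node {
	struct rb_node rb;
	unsigned long start, last;	/* interval endpoints */
	unsigned long __subtree_last;	/* max 'last' over this subtree */
};

/* The one callback a user would actually write. */
static unsigned long compute_subtree_last(struct interval_tree_node *node)
{
	unsigned long max = node->last;

	if (node->rb.rb_left) {
		struct interval_tree_node *l =
			rb_entry(node->rb.rb_left, struct interval_tree_node, rb);
		if (l->__subtree_last > max)
			max = l->__subtree_last;
	}
	if (node->rb.rb_right) {
		struct interval_tree_node *r =
			rb_entry(node->rb.rb_right, struct interval_tree_node, rb);
		if (r->__subtree_last > max)
			max = r->__subtree_last;
	}
	return max;
}

/* Hypothetical macro generating the propagate/copy/rotate boilerplate
 * from the compute function, so users don't replicate it by hand. */
#define DECLARE_AUGMENT_CALLBACKS(type, field, augmented, compute)	\
static void augment_propagate(struct rb_node *rb, struct rb_node *stop)	\
{									\
	while (rb != stop) {						\
		type *node = rb_entry(rb, type, field);			\
		unsigned long val = compute(node);			\
		if (node->augmented == val)				\
			break;						\
		node->augmented = val;					\
		rb = node->field.rb_parent;				\
	}								\
}									\
static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new) \
{									\
	type *old = rb_entry(rb_old, type, field);			\
	type *new = rb_entry(rb_new, type, field);			\
	new->augmented = old->augmented;				\
}									\
static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new) \
{									\
	type *old = rb_entry(rb_old, type, field);			\
	type *new = rb_entry(rb_new, type, field);			\
	new->augmented = old->augmented;				\
	old->augmented = compute(old);					\
}

DECLARE_AUGMENT_CALLBACKS(struct interval_tree_node, rb, __subtree_last,
			  compute_subtree_last)
```

With this shape, the interval tree supplies only compute_subtree_last and
the field names; the generated functions are byte-for-byte what the patch
open-codes above.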
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/