Date:	Wed, 27 Jun 2012 14:27:44 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	Rik van Riel <riel@...riel.com>, linux-mm@...ck.org,
	akpm@...ux-foundation.org, aarcange@...hat.com, minchan@...il.com,
	kosaki.motohiro@...il.com, andi@...stfloor.org, hannes@...xchg.org,
	mel@....ul.ie, linux-kernel@...r.kernel.org, danielfsantos@....net
Subject: Re: [PATCH -mm v2 01/11] mm: track free size between VMAs in VMA
 rbtree

On Tue, 2012-06-26 at 11:49 -0400, Rik van Riel wrote:
> 
> However, doing an insert or delete changes the
> gap size for the _next_ vma, and potentially a
> change in the maximum gap size for the parent
> node, so both insert and delete cause two tree
> walks :( 

Right,.. don't have anything smart for that :/
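Just to make sure I'm reading it right, the second walk would be something
along these lines -- vma_gap(), and the free_gap/max_gap fields, are names
I made up for the sketch, not necessarily the ones from your patch:

static void vma_gap_propagate(struct vm_area_struct *vma)
{
	struct rb_node *node = &vma->vm_rb;

	/* the first walk found 'vma'; recompute its gap to the previous vma */
	vma->free_gap = vma_gap(vma);

	/* second walk: push the new subtree maximum up towards the root */
	while (node) {
		struct vm_area_struct *v = rb_entry(node,
				struct vm_area_struct, vm_rb);
		unsigned long gap = v->free_gap;

		if (node->rb_left)
			gap = max(gap, rb_entry(node->rb_left,
					struct vm_area_struct, vm_rb)->max_gap);
		if (node->rb_right)
			gap = max(gap, rb_entry(node->rb_right,
					struct vm_area_struct, vm_rb)->max_gap);

		if (v->max_gap == gap)
			break;		/* nothing above us changes either */

		v->max_gap = gap;
		node = rb_parent(node);
	}
}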

I guess there's nothing for it but to create a number of variants of
rb_insert/rb_erase, possibly using Daniel's 'template' stuff so we don't
actually have to maintain multiple copies of the code.

Maybe something simple like:

static __always_inline void
__rb_insert(struct rb_node *node, struct rb_root *root, rb_augment_f func, bool threaded)
{
	/* all the fancy code */
}

void rb_insert(struct rb_node *node, struct rb_root *root)
{
	__rb_insert(node, root, NULL, false);
}

void rb_insert_threaded(struct rb_node *node, struct rb_root *root)
{
	__rb_insert(node, root, NULL, true);
}

void rb_insert_augment(struct rb_node *node, struct rb_root *root, rb_augment_f func)
{
	__rb_insert(node, root, func, false);
}

void rb_insert_augment_threaded(struct rb_node *node, struct rb_root *root, rb_augment_f func)
{
	__rb_insert(node, root, func, true);
}

Would do, except it wouldn't be able to inline the augment function. For
that to happen we'd need to move __rb_insert() and the
__rb_insert_augment*() variants into rbtree.h.

But it would give us clean variants without augmentation/threading, and
without too much duplicated code.
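
If we did go the rbtree.h route, it would look roughly like this (sketch
only, to show the shape):

/* rbtree.h */
static __always_inline void
__rb_insert(struct rb_node *node, struct rb_root *root,
	    rb_augment_f func, bool threaded)
{
	/* all the fancy code, now visible to every caller */
}

/*
 * With the body in the header, a constant 'func' (and constant
 * 'threaded') gets folded into each call site, so the augment callback
 * can be inlined.  The plain rb_insert()/rb_insert_threaded() wrappers
 * could stay out of line in rbtree.c so we don't instantiate the whole
 * thing for every user.
 */
static __always_inline void
rb_insert_augment(struct rb_node *node, struct rb_root *root,
		  rb_augment_f func)
{
	__rb_insert(node, root, func, false);
}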


BTW, is there a reason rb_link_node() and rb_insert_color() are separate
functions? They seem to always be used together in sequence.
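E.g. the usual sequence is (schematic; 'struct my_thing' is made up):

#include <linux/rbtree.h>

struct my_thing {
	struct rb_node	node;
	unsigned long	key;
};

static void my_thing_insert(struct rb_root *root, struct my_thing *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;

	/* walk down to find the insertion point, remembering the parent */
	while (*p) {
		parent = *p;
		if (new->key < rb_entry(parent, struct my_thing, node)->key)
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}

	rb_link_node(&new->node, parent, p);	/* hook into the slot we found */
	rb_insert_color(&new->node, root);	/* then rebalance/recolour */
}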
