Message-ID: <20090705173208.GB19477@ami.dom.local>
Date: Sun, 5 Jul 2009 19:32:08 +0200
From: Jarek Poplawski <jarkao2@...il.com>
To: Paweł Staszewski <pstaszewski@...are.pl>
Cc: Linux Network Development list <netdev@...r.kernel.org>,
Robert Olsson <robert@...ur.slu.se>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH net-2.6] Re: rib_trie / Fix inflate_threshold_root. Now=15 size=11 bits

On Sun, Jul 05, 2009 at 06:20:03PM +0200, Jarek Poplawski wrote:
> On Sun, Jul 05, 2009 at 02:30:03AM +0200, Paweł Staszewski wrote:
> > Oh
> >
> > I forgot - please, Jarek, give me the patch with sync rcu and I will test
> > it on a preempt kernel
>
> A non-preempt kernel probably needs something like this more, but
> comparing is always interesting. This patch is based on Paul's
> suggestion (I hope).
Hold on ;-) Here is something even better... Syncing after 128 pages
might still be too slow, so this version starts with a higher initial
value, 1000, and you can change it while testing via:

/sys/module/fib_trie/parameters/sync_pages

It would be interesting to find the lowest acceptable value.
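
To get a feel for what a given sync_pages setting means, here is a tiny
standalone userspace sketch that mirrors the per-tnode accounting done in
tnode_free_safe() below. The page size, pointer size and struct tnode size
in it are rough 64-bit guesses of mine, not values read from a running
kernel; the program is only an illustration, not part of the patch.

/* sync_pages_calc.c - rough feel for the sync_pages threshold. */
#include <stdio.h>

#define ASSUMED_PAGE_SIZE	4096UL	/* typical x86 PAGE_SIZE */
#define ASSUMED_TNODE_HDR	48UL	/* rough sizeof(struct tnode), 64-bit */
#define ASSUMED_PTR_SIZE	8UL	/* sizeof(struct node *), 64-bit */

/* Same per-tnode accounting as tnode_free_safe() in the patch:
 * header plus the child pointer array (2^bits pointers). */
static unsigned long tnode_bytes(unsigned int bits)
{
	return ASSUMED_TNODE_HDR + (ASSUMED_PTR_SIZE << bits);
}

int main(void)
{
	const int sync_pages_values[] = { 128, 500, 1000 };
	const unsigned int bits = 5;	/* e.g. a 32-child tnode */
	unsigned int i;

	for (i = 0; i < sizeof(sync_pages_values) / sizeof(sync_pages_values[0]); i++) {
		unsigned long limit = ASSUMED_PAGE_SIZE * sync_pages_values[i];

		printf("sync_pages=%d: synchronize_rcu() after ~%lu KB pending "
		       "(~%lu deferred %u-bit tnodes)\n",
		       sync_pages_values[i], limit / 1024,
		       limit / tnode_bytes(bits), bits);
	}
	return 0;
}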
Jarek P.
---> (synchronize take 8; apply on top of 2.6.29.x with the last
all-in-one patch, or net-2.6)
net/ipv4/fib_trie.c | 12 ++++++++++++
1 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
index 00a54b2..decc8d0 100644
--- a/net/ipv4/fib_trie.c
+++ b/net/ipv4/fib_trie.c
@@ -71,6 +71,7 @@
 #include <linux/netlink.h>
 #include <linux/init.h>
 #include <linux/list.h>
+#include <linux/moduleparam.h>
 #include <net/net_namespace.h>
 #include <net/ip.h>
 #include <net/protocol.h>
@@ -164,6 +165,10 @@ static struct tnode *inflate(struct trie *t, struct tnode *tn);
 static struct tnode *halve(struct trie *t, struct tnode *tn);
 /* tnodes to free after resize(); protected by RTNL */
 static struct tnode *tnode_free_head;
+static size_t tnode_free_size;
+
+static int sync_pages __read_mostly = 1000;
+module_param(sync_pages, int, 0640);
 
 static struct kmem_cache *fn_alias_kmem __read_mostly;
 static struct kmem_cache *trie_leaf_kmem __read_mostly;
@@ -393,6 +398,8 @@ static void tnode_free_safe(struct tnode *tn)
 	BUG_ON(IS_LEAF(tn));
 	tn->tnode_free = tnode_free_head;
 	tnode_free_head = tn;
+	tnode_free_size += sizeof(struct tnode) +
+			   (sizeof(struct node *) << tn->bits);
 }
 
 static void tnode_free_flush(void)
@@ -404,6 +411,11 @@ static void tnode_free_flush(void)
 		tn->tnode_free = NULL;
 		tnode_free(tn);
 	}
+
+	if (tnode_free_size >= PAGE_SIZE * sync_pages) {
+		tnode_free_size = 0;
+		synchronize_rcu();
+	}
 }
 
 static struct leaf *leaf_new(void)
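
For reference, with the hunks above applied the two helpers end up reading
roughly like this (reconstructed from the hunk context plus the surrounding
net-2.6 code; just a sketch for review, the patch above is authoritative):

/* Queue a tnode for freeing after a grace period (called under RTNL)
 * and account for the memory it keeps pinned until then. */
static void tnode_free_safe(struct tnode *tn)
{
	BUG_ON(IS_LEAF(tn));
	tn->tnode_free = tnode_free_head;
	tnode_free_head = tn;
	tnode_free_size += sizeof(struct tnode) +
			   (sizeof(struct node *) << tn->bits);
}

static void tnode_free_flush(void)
{
	struct tnode *tn;

	while ((tn = tnode_free_head)) {
		tnode_free_head = tn->tnode_free;
		tn->tnode_free = NULL;
		tnode_free(tn);
	}

	/* Too much memory is pending RCU-deferred freeing: force a
	 * grace period so it is actually released before we defer
	 * more. */
	if (tnode_free_size >= PAGE_SIZE * sync_pages) {
		tnode_free_size = 0;
		synchronize_rcu();
	}
}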