Message-ID: <4A53DA9C.5000202@itcare.pl>
Date:	Wed, 08 Jul 2009 01:30:36 +0200
From:	Paweł Staszewski <pstaszewski@...are.pl>
To:	Jarek Poplawski <jarkao2@...il.com>
CC:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linux Network Development list <netdev@...r.kernel.org>,
	Robert Olsson <robert@...ur.slu.se>
Subject: Re: [PATCH net-2.6] Re: rib_trie / Fix inflate_threshold_root. Now=15 size=11 bits

Paweł Staszewski wrote:
> Jarek Poplawski wrote:
>> On Mon, Jul 06, 2009 at 01:53:49AM +0200, Paweł Staszewski wrote:
>> ...
>>  
>>> So I ran tests with different sync_pages values:
>>>
>>> ####################################
>>> sync_pages: 64
>>> total size reaches its maximum in 17 s
>>>     
>> ...
>>  
>>> ######################################
>>> sync_pages: 128
>>> fib_trie total size reaches max in 14 s
>>>     
>> ...
>>  
>>> #########################################
>>> sync_pages: 256
>>> hmm, no difference; also 10 s
>>>     
>>
>> 14 == 10!? ;-)
>> ...
>>   
> :) I missed one test
>>> And with sync_pages higher than 256, the time to fill the kernel
>>> routes stays the same, approx. 10 s.
>>>     
>>
>> Hmm... So, it's better than I expected; syncing after 128 or 256 pages
>> could be quite reasonable. But then it would be interesting to find
>> out whether, with such a safety margin, we could go back to more
>> aggressive values for possibly better performance. So here is 'the
>> same' patch (so the previous one, take 8, should be reverted), but
>> with the additional possibility to change:
>> /sys/module/fib_trie/parameters/inflate_threshold_root
>>
>> I guess you could try e.g. whether sync_pages 256 with
>> inflate_threshold_root 15 gives faster lookups (or lower CPU load);
>> with this, those inflate warnings could be back, btw.; or maybe you'll
>> find that something in between, like inflate_threshold_root 20, is
>> optimal for you. (I think it should be enough to try this only for
>> PREEMPT_NONE, unless you have spare time ;-)
>>
>>   
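[Editor's note: the two knobs discussed above can be combined at runtime. A minimal sketch, assuming both this patch and the earlier sync_pages patch are applied; the sysfs paths below exist only then, and the values are the ones suggested in this thread, not verified defaults:]

```shell
# Runtime tuning via sysfs (requires root; paths exist only with the
# patches from this thread applied):
#   echo 256 > /sys/module/fib_trie/parameters/sync_pages
#   echo 15  > /sys/module/fib_trie/parameters/inflate_threshold_root
#
# fib_trie is built into the kernel, so the same values can also be set
# on the kernel command line, e.g.: fib_trie.sync_pages=256

# tnode_free_flush() in the patch calls synchronize_rcu() once
# tnode_free_size reaches PAGE_SIZE * sync_pages; for a 4 KiB page and
# sync_pages=256 that threshold in bytes is:
page_size=4096
sync_pages=256
echo $((page_size * sync_pages))   # 1048576, i.e. 1 MiB of pending tnodes
```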
> And I can't make good tests of CPU load because of the problem I
> described in the "weird problem" emails.
> It depends on when I run mpstat and for how long: for 15 s I have 1 to
> 3 % CPU load, and for the next 15 s almost 40 %.
> I tried "mpstat -P ALL 1 60", but after the 15 s of 1 to 3 % load the
> next high-load period is different every time, varying between 30 and
> 50 %.
>
> So I made the test shorter, during a 1 to 3 % load period: "mpstat -P
> ALL 1 10"; output in the attached file.
>
> Regards
> Paweł Staszewski
>
>
I forgot to add:
traffic while I ran the tests varied by about +/- 10 Mbit/s between runs:
eth0:         RX: 231.21 Mb/s          TX: 287.40 Mb/s
eth1:         RX: 289.19 Mb/s          TX: 231.35 Mb/s
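[Editor's note: a sketch of how a route-fill test like the ones timed above can be driven; the prefix scheme, device name, and route count here are illustrative only, not the actual full-table feed used in these tests:]

```shell
# Build a batch file of /24 routes for "ip -batch" and count them.
# Installing the routes (and timing the fill, as in the tests above)
# needs root, so that step is left commented out.
batch=$(mktemp)
n=0
for a in 1 2; do
    for b in $(seq 0 255); do
        echo "route add 10.$a.$b.0/24 dev eth0" >> "$batch"
        n=$((n + 1))
    done
done
echo "$n routes written to $batch"
# As root, time the kernel-side fill:
#   time ip -batch "$batch"
```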

>
>> Thanks,
>> Jarek P.
>> ---> (synchronize take 9; apply on top of 2.6.29.x with the last
>>       all-in-one patch, or net-2.6)
>>
>>  net/ipv4/fib_trie.c |   18 ++++++++++++++++--
>>  1 files changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
>> index 00a54b2..e8fca11 100644
>> --- a/net/ipv4/fib_trie.c
>> +++ b/net/ipv4/fib_trie.c
>> @@ -71,6 +71,7 @@
>>  #include <linux/netlink.h>
>>  #include <linux/init.h>
>>  #include <linux/list.h>
>> +#include <linux/moduleparam.h>
>>  #include <net/net_namespace.h>
>>  #include <net/ip.h>
>>  #include <net/protocol.h>
>> @@ -164,6 +165,10 @@ static struct tnode *inflate(struct trie *t, struct tnode *tn);
>>  static struct tnode *halve(struct trie *t, struct tnode *tn);
>>  /* tnodes to free after resize(); protected by RTNL */
>>  static struct tnode *tnode_free_head;
>> +static size_t tnode_free_size;
>> +
>> +static int sync_pages __read_mostly = 1000;
>> +module_param(sync_pages, int, 0640);
>>  
>>  static struct kmem_cache *fn_alias_kmem __read_mostly;
>>  static struct kmem_cache *trie_leaf_kmem __read_mostly;
>> @@ -316,9 +321,11 @@ static inline void check_tnode(const struct tnode *tn)
>>  
>>  static const int halve_threshold = 25;
>>  static const int inflate_threshold = 50;
>> -static const int halve_threshold_root = 15;
>> -static const int inflate_threshold_root = 25;
>>  
>> +static int inflate_threshold_root __read_mostly = 25;
>> +module_param(inflate_threshold_root, int, 0640);
>> +
>> +#define halve_threshold_root    (inflate_threshold_root / 2 + 1)
>>  
>>  static void __alias_free_mem(struct rcu_head *head)
>>  {
>> @@ -393,6 +400,8 @@ static void tnode_free_safe(struct tnode *tn)
>>      BUG_ON(IS_LEAF(tn));
>>      tn->tnode_free = tnode_free_head;
>>      tnode_free_head = tn;
>> +    tnode_free_size += sizeof(struct tnode) +
>> +               (sizeof(struct node *) << tn->bits);
>>  }
>>  
>>  static void tnode_free_flush(void)
>> @@ -404,6 +413,11 @@ static void tnode_free_flush(void)
>>          tn->tnode_free = NULL;
>>          tnode_free(tn);
>>      }
>> +
>> +    if (tnode_free_size >= PAGE_SIZE * sync_pages) {
>> +        tnode_free_size = 0;
>> +        synchronize_rcu();
>> +    }
>>  }
>>  
>>  static struct leaf *leaf_new(void)
>> -- 
>

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
