Message-ID: <47E3EF70.6080000@cosmosbay.com>
Date:	Fri, 21 Mar 2008 18:25:04 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Stephen Hemminger <shemminger@...tta.com>,
	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH net-2.6.26] fib_trie: RCU optimizations

Paul E. McKenney wrote:
> On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
>   
>> Small performance improvements.
>>
>> Eliminate an unneeded barrier on deletion. The first pointer update
>> at the head of the list is ordered by the second call to
>> rcu_assign_pointer. See hlist_add_after_rcu for comparison.
>>
>> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
>> add a prefetch.
>>     
>
> Acked-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
> Justification below.
>
>   
>> Signed-off-by: Stephen Hemminger <shemminger@...tta.com>
>>
>> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
>> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
>> @@ -977,8 +977,8 @@ restart:
>>  			 * must be visible to another weakly ordered CPU before
>>  			 * the insertion at the start of the hash chain.
>>  			 */
>> -			rcu_assign_pointer(rth->u.dst.rt_next,
>> -					   rt_hash_table[hash].chain);
>> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
>> +
>>     
>
> This is OK because it is finalizing a deletion.  If this were instead
> an insertion, this would of course be grossly illegal and dangerous.
>
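To make the distinction concrete, a minimal sketch (generic names in a
kernel context, not the actual route-cache code): publishing a brand-new
node needs rcu_assign_pointer(), while re-linking an already-visible
node can use a plain store, as long as a later rcu_assign_pointer()
orders it, which is exactly the situation in the hunk above.

	struct node {
		int key;
		struct node *next;
	};

	static struct node *head;

	/* New node: its fields must be visible before the node is
	 * reachable, hence rcu_assign_pointer() for the head update. */
	static void insert(struct node *n, int key)
	{
		n->key = key;
		n->next = head;			/* n not yet reachable */
		rcu_assign_pointer(head, n);	/* publish, with barrier */
	}

	/* Already-visible node moved to the head: readers can already
	 * see n fully initialized, so the ->next store can be plain;
	 * the rcu_assign_pointer() that installs n as the new head
	 * orders it for other CPUs. */
	static void move_to_head(struct node *n)
	{
		n->next = head;			/* plain store suffices */
		rcu_assign_pointer(head, n);
	}
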
>   
>>  			/*
>>  			 * Since lookup is lockfree, the update writes
>>  			 * must be ordered for consistency on SMP.
>> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
>>  	hash = rt_hash(daddr, saddr, iif);
>>
>>  	rcu_read_lock();
>> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
>> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
>> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
>> +	     rth = rth->u.dst.rt_next) {
>> +		prefetch(rth->u.dst.rt_next);
>>  		if (rth->fl.fl4_dst == daddr &&
>>  		    rth->fl.fl4_src == saddr &&
>>  		    rth->fl.iif == iif &&
>>     
>
> Works, though I would guess that increasingly aggressive compiler
> optimization will eventually force us to change the list.h macros
> to look like what you had to begin with...  Sigh!!!
>
>   
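For reference, the two loop shapes side by side (same toy node type as
the sketch above, with process() standing in for the flow comparison):
the original wraps every link load the reader follows in
rcu_dereference(), while the patched shape moves rcu_dereference() into
the termination test and loads the link itself plainly, which is the
part an aggressive compiler could someday legally mangle.

	/* Original shape: each traversed link is loaded through
	 * rcu_dereference(), so the compiler may not re-load or tear it. */
	for (n = rcu_dereference(head); n; n = rcu_dereference(n->next))
		process(n);

	/* Patched shape, as in the hunk above: rcu_dereference() in the
	 * loop test, a bare link load, plus a prefetch of the next entry. */
	for (n = head; rcu_dereference(n); n = n->next) {
		prefetch(n->next);
		process(n);
	}
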

Hmm... I missed the original patch, but this prefetch() is wrong.

On lookups, we don't want to prefetch the beginning of "struct rtable"
entries.

We were very careful in the past
(http://git2.kernel.org/?p=linux/kernel/git/davem/net-2.6.26.git;a=commit;h=1e19e02ca0c5e33ea73a25127dbe6c3b8fcaac4b
"[NET]: Reorder fields of struct dst_entry")
to place the "next" pointer at the end of "struct dst_entry" so that
lookups bring in only one cache line per entry.
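
To illustrate with a toy layout (made-up field sizes, assuming 64-byte
cache lines): the reordering keeps the chain walk on the final cache
line of each entry, the one holding both the next pointer and the
comparison keys, so the walk costs one hot line per entry.

	struct toy_entry {
		char cold[192];		/* dst state never read on lookup:
					 * three untouched cache lines */
		struct toy_entry *next;	/* deliberately last, so it... */
		unsigned int key;	/* ...shares a line with the keys */
	};

	/* The walk reads only ->next and ->key: one hot line per entry. */
	static struct toy_entry *lookup(struct toy_entry *head,
					unsigned int key)
	{
		struct toy_entry *e;

		for (e = head; e; e = e->next)
			if (e->key == key)
				return e;
		return NULL;
	}

	/* prefetch(e->next) would pull in the *first* line of the next
	 * entry (&e->next->cold[0]), which the walk never reads; the
	 * hot line containing ->next and ->key is the useful target. */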

Thank you



