Message-ID: <47E3F401.3070707@cosmosbay.com>
Date:	Fri, 21 Mar 2008 18:44:33 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Stephen Hemminger <shemminger@...tta.com>
Cc:	paulmck@...ux.vnet.ibm.com, David Miller <davem@...emloft.net>,
	netdev@...r.kernel.org
Subject: Re: [PATCH net-2.6.26] fib_trie: RCU optimizations

Stephen Hemminger wrote:
> On Fri, 21 Mar 2008 18:25:04 +0100
> Eric Dumazet <dada1@...mosbay.com> wrote:
>
>> Paul E. McKenney wrote:
>>> On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
>>>> Small performance improvements.
>>>>
>>>> Eliminate an unneeded barrier on deletion. The first pointer update
>>>> is ordered by the second call to rcu_assign_pointer, which updates
>>>> the head of the list. See hlist_add_after_rcu for comparison.
>>>>
>>>> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
>>>> add a prefetch.
>>>
>>> Acked-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>>>
>>> Justification below.
>>>
>>>> Signed-off-by: Stephen Hemminger <shemminger@...tta.com>
>>>>
>>>> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
>>>> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
>>>> @@ -977,8 +977,8 @@ restart:
>>>>  			 * must be visible to another weakly ordered CPU before
>>>>  			 * the insertion at the start of the hash chain.
>>>>  			 */
>>>> -			rcu_assign_pointer(rth->u.dst.rt_next,
>>>> -					   rt_hash_table[hash].chain);
>>>> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
>>>> +
>>>
>>> This is OK because it is finalizing a deletion.  If this were instead
>>> an insertion, this would of course be grossly illegal and dangerous.
>>>
>>>>  			/*
>>>>  			 * Since lookup is lockfree, the update writes
>>>>  			 * must be ordered for consistency on SMP.
>>>> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
>>>>  	hash = rt_hash(daddr, saddr, iif);
>>>>
>>>>  	rcu_read_lock();
>>>> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
>>>> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
>>>> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
>>>> +	     rth = rth->u.dst.rt_next) {
>>>> +		prefetch(rth->u.dst.rt_next);
>>>>  		if (rth->fl.fl4_dst == daddr &&
>>>>  		    rth->fl.fl4_src == saddr &&
>>>>  		    rth->fl.iif == iif &&
>>>
>>> Works, though I would guess that increasingly aggressive compiler
>>> optimization will eventually force us to change the list.h macros
>>> to look like what you had to begin with...  Sigh!!!
>>>
>> Hum... I missed the original patch, but this prefetch() is wrong.
>>
>> On lookups, we don't want to prefetch the beginning of "struct rtable" 
>> entries.
>
> That makes sense when the hash is perfect, but under a DoS scenario
> the hash chain will not match exactly, and the next pointer will
> be needed.
>

Hum... your prefetch() is useful *only* if the hash is perfect.

My point is: I care about the DoS scenario :)

struct something {
        char pad[128];  /* */
        struct something *next;
        int key1;
        int key2;
};

struct something *lookup(int key1, int key2)
{
        struct something *candidate, *next;
        ...
        while (not found) {
                next = candidate->next;
                prefetch(next); /* not useful for the lookup phase, since
                                 * it brings in next->pad[0..XX] */
                if (key1 == candidate->key1 && ...) { ... }
                ....
        }
        ...
}

You really need something like prefetch(&next->next);
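
Spelled out, that would look something like this (a minimal sketch built on
the hypothetical layout above; lookup_loop, the NULL guard and the prefetch
macro are illustrative, not the actual route.c code):

#include <stddef.h>

/* Hypothetical layout from the sketch above: the fields used during
 * lookup live 128 bytes past the start of the object, i.e. on a
 * different cache line than pad[0]. */
struct something {
        char pad[128];
        struct something *next;
        int key1;
        int key2;
};

/* The kernel's prefetch() expands to an arch-specific hint;
 * __builtin_prefetch is the usual GCC spelling. */
#define prefetch(p) __builtin_prefetch(p)

static struct something *lookup_loop(struct something *candidate,
                                     int key1, int key2)
{
        while (candidate) {
                struct something *next = candidate->next;

                /* Hint at the line the next iteration will actually
                 * touch: the one holding ->next/->key1/->key2, not
                 * the cold pad[] at the head of the object. */
                if (next)
                        prefetch(&next->next);

                if (candidate->key1 == key1 && candidate->key2 == key2)
                        return candidate;
                candidate = next;
        }
        return NULL;
}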

But I already tested this in this function in the past, and got no
improvement at all.

The loop is so small that prefetch hints are thrown away by the CPU, or the
cost of setting up the prefetch (register + offset) is too expensive...
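
(For reference, assuming a GCC-style toolchain, the hint itself is just the
builtin with an address operand, e.g.

        __builtin_prefetch(&next->next, 0, 3);  /* read, high locality */

one address computation feeding one prefetch instruction, so the setup cost
really is just that register + offset arithmetic.)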