Message-ID: <20081029215812.GH6732@linux.vnet.ibm.com>
Date: Wed, 29 Oct 2008 14:58:12 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Corey Minyard <minyard@....org>
Cc: Eric Dumazet <dada1@...mosbay.com>,
David Miller <davem@...emloft.net>, shemminger@...tta.com,
benny+usenet@...rsen.dk, netdev@...r.kernel.org,
Christoph Lameter <cl@...ux-foundation.org>,
a.p.zijlstra@...llo.nl, johnpol@....mipt.ru,
Christian Bell <christian@...i.com>
Subject: Re: [PATCH 2/2] udp: RCU handling for Unicast packets.
On Wed, Oct 29, 2008 at 04:29:19PM -0500, Corey Minyard wrote:
> Paul E. McKenney wrote:
> ..snip
>>> Hum... Another way of handling all those cases and avoid memory barriers
>>> would be to have different "NULL" pointers.
>>>
>>> Each hash chain should have a unique "NULL" pointer (in the case of UDP,
>>> it can be one of the 128 values [ (void *)0 .. (void *)127 ]).
>>>
>>> Then, when performing a lookup, a reader should check that the "NULL"
>>> pointer it gets at the end of its lookup is the "hash" value of its chain.
>>>
>>> If not -> restart the loop, aka "goto begin;" :)
>>>
>>> We could avoid memory barriers then.
>>>
>>> In the two cases Corey mentioned, this trick could let us avoid memory
>>> barriers.
>>> (the existing one in sk_add_node_rcu(sk, &hslot->head) should be enough)
>>>
>>> What do you think ?
>>>
>>
>> Kinky!!! ;-)
>>
> My thought exactly ;-).
>
>> Then the rcu_dereference() would be supplying the needed memory barriers.
>>
>> Hmmm... I guess that the only confusion would be if the element got
>> removed and then added to the same list. But then if its pointer was
>> pseudo-NULL, then that would mean that all subsequent elements had been
>> removed, and all preceding ones added after the scan started.
>>
>> Which might well be harmless, but I must defer to you on this one at
>> the moment.
>>
> I believe that is harmless, as re-scanning the same data should be fine.
>
>> If you need a larger hash table, another approach would be to set the
>> pointer's low-order bit, allowing the upper bits to be a full-sized
>> index -- or even a pointer to the list header. Just make very sure
>> to clear the pointer when freeing, or an element on the freelist
>> could end up looking like a legitimate end of list... Which again
>> might well be safe, but why inflict this on oneself?
>>
> Kind of my thought, too. That's a lot of work to avoid a single smp_wmb()
> on the socket creation path. Plus this could be extra confusing.
Just to be clear, I was fulminating against any potential failure to
clear the pseudo-NULL pointer, not against the pseudo-pointer itself.
This sort of trick is already used in some of the RCU-protected trees
(for example, FIB tree, IIRC), so I would look a bit funny fulminating
too hard against it. ;-)
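To make the trick a bit more concrete, here is a rough sketch of what
such a lookup might look like.  The names (my_sock, CHAIN_END(), and so
on) are made up for illustration -- this is not the real UDP hash code --
and the caller is assumed to be inside rcu_read_lock():

struct my_sock {
	unsigned int	 hash;		/* slot this socket hashes to */
	struct my_sock	*next;		/* RCU-protected chain link */
	/* ... addresses, ports, refcount, ... */
};

/* 128 hash slots assumed, so pseudo-NULLs are (void *)0 .. (void *)127 */
#define CHAIN_END(slot)		((struct my_sock *)(unsigned long)(slot))
#define IS_CHAIN_END(ptr)	((unsigned long)(ptr) < 128)

static struct my_sock *chain_lookup(struct my_sock **table, unsigned int slot,
				    bool (*match)(struct my_sock *))
{
	struct my_sock *sk;

begin:
	for (sk = rcu_dereference(table[slot]);
	     !IS_CHAIN_END(sk);
	     sk = rcu_dereference(sk->next))
		if (match(sk))
			return sk;	/* caller revalidates and takes a reference */

	/*
	 * If we fell off the end onto some other chain's pseudo-NULL,
	 * the element we were traversing was freed and reused on that
	 * other chain; restart from the head of our own chain.
	 */
	if (sk != CHAIN_END(slot))
		goto begin;
	return NULL;
}

The writer side then just has to be careful to store the proper
pseudo-NULL when an element is freed or moved, which is the failure
mode I was fulminating about.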
The only other high-level approach I have come up with thus far is to
maintain separate hash tables for the long-lived UDP sockets (protected
by RCU) and for the short-lived UDP sockets (protected by locking).
Given the usual bimodal traffic pattern, most of the sockets are short
lived, but most of the data is transmitted over long-lived sockets. If a
socket receives more than N packets (10? 50? 100?), it is moved from the
short-lived table to the long-lived table. Sockets on the short-lived
table may be freed directly, while sockets on the long-lived table must
be RCU freed -- but this added overhead should be in the noise for a
long-lived connection. Lookups hit the RCU-protected table, then the
lock-protected table, then the RCU-protected table again, this last time
while still holding the lock. (Clearly, only search until you find the
desired socket.)
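Spelled out in (completely made-up) code, with rcu_table[],
locked_table[], table_lock[], and slot_lookup() standing in for whatever
the real structures and per-chain search would be, the lookup order
would be roughly:

/* Hypothetical per-slot search helper; the two tables need not use
 * the same chain representation, so treat this as a placeholder. */
struct my_sock *slot_lookup(struct my_sock **table, unsigned int slot,
			    bool (*match)(struct my_sock *));

static struct my_sock *rcu_table[128];		/* long-lived, RCU-protected */
static struct my_sock *locked_table[128];	/* short-lived, lock-protected */
static spinlock_t table_lock[128];		/* guards locked_table[] and promotion */

static struct my_sock *udp_two_table_lookup(unsigned int slot,
					    bool (*match)(struct my_sock *))
{
	struct my_sock *sk;

	/* 1. Long-lived sockets first: lockless, RCU-protected. */
	rcu_read_lock();
	sk = slot_lookup(rcu_table, slot, match);
	/* real code would take a reference before dropping rcu_read_lock() */
	rcu_read_unlock();
	if (sk)
		return sk;

	/* 2. Short-lived sockets under the per-slot lock. */
	spin_lock(&table_lock[slot]);
	sk = slot_lookup(locked_table, slot, match);

	/*
	 * 3. The socket might have been promoted from the locked table
	 * to the RCU table between steps 1 and 2.  Because promotion is
	 * done under table_lock, rechecking the RCU table while still
	 * holding that lock cannot miss it.
	 */
	if (!sk) {
		rcu_read_lock();
		sk = slot_lookup(rcu_table, slot, match);
		rcu_read_unlock();
	}
	spin_unlock(&table_lock[slot]);

	return sk;
}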
However, I am not certain that this short-term/long-term approach is
better than the approach that Eric is proposing. It might in fact be
worse. But I throw it out anyway on the off-chance that it is helpful
as a comparison or as a solution to some future problem.
Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html