Message-ID: <20081101031056.GA6955@linux.vnet.ibm.com>
Date: Fri, 31 Oct 2008 20:10:56 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Corey Minyard <minyard@....org>,
David Miller <davem@...emloft.net>, shemminger@...tta.com,
benny+usenet@...rsen.dk, netdev@...r.kernel.org,
Christoph Lameter <cl@...ux-foundation.org>,
johnpol@....mipt.ru, Christian Bell <christian@...i.com>
Subject: Re: [PATCH 2/2] udp: RCU handling for Unicast packets.
On Fri, Oct 31, 2008 at 05:40:46PM +0100, Eric Dumazet wrote:
> Paul E. McKenney a écrit :
>> On Thu, Oct 30, 2008 at 12:30:20PM +0100, Eric Dumazet wrote:
>>> - while (udp_lib_lport_inuse(net, snum, udptable, sk,
>>> - saddr_comp)) {
>>> + for (;;) {
>>> + hslot = &udptable->hash[udp_hashfn(net, snum)];
>>> + spin_lock_bh(&hslot->lock);
>>> + if (!udp_lib_lport_inuse(net, snum, hslot, sk, saddr_comp))
>>> + break;
>>> + spin_unlock_bh(&hslot->lock);
>>> do {
>>> snum = snum + rand;
>>> } while (snum < low || snum > high);
>> The above -really- confuses me, but not part of this patch. If we are
>> out of range, keep going? Well, I guess that since it is a short, we
>> cannot go very far...
>>> if (snum == first)
>>> goto fail;
>> And I don't understand how we are guaranteed to have scanned all the
>> possible ports upon failure, but happy to leave that to you guys.
>
> Well, we have 65536 (= 2^16) possible port values, and while 'rand' is
> random, it has the interesting property/bias of being odd.
>
> We know (thanks to modular arithmetic / congruence relations) that we
> will hit all 65536 values exactly once, after exactly 65536 iterations.
Ah, got it! Thank you for the explanation!
I was fixating on the low..high interval. ;-)
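To illustrate the point (a standalone sketch, not from the patch -- the
starting port and step value below are made up): because 'rand' is odd,
gcd(rand, 2^16) == 1, so a 16-bit counter stepped by 'rand' walks through
every possible port value exactly once before returning to where it started.

	#include <stdio.h>

	int main(void)
	{
		unsigned short snum = 12345;	/* arbitrary starting port */
		unsigned short rand = 31337;	/* any odd step works */
		unsigned short first = snum;
		unsigned int visited = 0;

		do {
			visited++;
			snum = (unsigned short)(snum + rand); /* wraps mod 2^16 */
		} while (snum != first);

		/* prints 65536: every 16-bit value was hit exactly once */
		printf("distinct ports visited: %u\n", visited);
		return 0;
	}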
Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html