Message-ID: <20081006225210.GA29794@hmsreliant.think-freely.org>
Date: Mon, 6 Oct 2008 18:52:10 -0400
From: Neil Horman <nhorman@...driver.com>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: Bill Fink <billfink@...dspring.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
kuznet@....inr.ac.ru, pekkas@...core.fi, jmorris@...ei.org,
yoshfuji@...ux-ipv6.org, kaber@...sh.net,
Evgeniy Polyakov <johnpol@....mipt.ru>
Subject: Re: [PATCH] net: implement emergency route cache rebuilds when
gc_elasticity is exceeded
On Mon, Oct 06, 2008 at 11:21:38PM +0200, Eric Dumazet wrote:
> Neil Horman wrote:
>> So, I've been playing with this patch, and I've not figured out exactly
>> what's bothering me yet: the math seems right, but something doesn't seem
>> right about the outcome of this algorithm. I've tested on my local system,
>> and all works well, because the route cache is well behaved and the sd
>> value always works out to be very small, so ip_rt_gc_elasticity is used.
>> So I've been working through some scenarios by hand to see what this looks
>> like with larger numbers. If I assume ip_rt_gc_interval is 60 and
>> rt_hash_log is 17, my sample count here is 7864320 samples per run. If
>> within that sample set 393216 (about 5%) of the buckets have one entry on
>> the chain, and all the rest are zeros, my hand calculations give a standard
>> deviation of approximately 140 and an average of .4. That implies that any
>> one chain in the sample set could be almost 500 entries long before it
>> triggered a cache rebuild. Does that seem reasonable?
>>
>
> If rt_hash_log is 17 and the interval is 60, then you should scan
> (60 << 17)/300 slots. That's 26214 slots (i.e., 20% of the 2^17 slots).
>
> I have no idea how you can get sd = 140, even scaled by (1 << 3), when
> slots are either empty or hold a single entry...
>
I don't either, that was my concern :).
> If 5% of your slots have one element, then the average length is 0.05 :)
>
Yes, and the average worked out properly, which is why I was concerned.
If you take the even simpler case you state above (I admit I missed the
/300 part of the sampling, but no matter):
samples = 26214
Assume each sample has a chain length of 1
sum = 26214 * (1 << 3) = 209712
sum2 = sum * sum = 209712 * 209712 = 43979122944
avg = sum / samples = 209712 / 26214 = 8 (correct)
sd = sqrt(sum2 / samples - avg*avg) = sqrt(43979122944/26214 - 64) = 1295
sd >> 3 = 1295.23 >> 3 = 161
Clearly, given the assumption that every chain in the sample set has exactly
one entry, so that every sample is precisely the average, the traditional
method of computing standard deviation should yield an sd of exactly 0.
Computed traditionally, sum2 would be the sum of the squared samples,
26214 * 64 = 1677696, and sqrt(1677696/26214 - 64) = sqrt(0) = 0. The math
above instead gives something significantly larger. I'm hoping I missed
something, but I don't think I have.
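
To make this concrete, here is a small userspace sketch (my reconstruction
from the numbers in this thread, not code from the patch itself; it assumes
the suspect path effectively squares the accumulated sum) that reproduces
both the 161 above and the ~140 from my earlier mail, next to what a true
per-sample sum of squares yields:

#include <math.h>
#include <stdio.h>

/*
 * Hypothetical demo only; build with "gcc -o sd_demo sd_demo.c -lm".
 * Every one of "nonempty" sampled buckets holds a chain of length "len";
 * the remaining samples are empty.  Lengths are scaled by (1 << 3) as in
 * the patch under discussion.
 */
static void report(const char *name, double samples, double nonempty,
		   double len)
{
	double scaled = len * (1 << 3);
	double sum = nonempty * scaled;
	double avg = sum / samples;
	/* suspect form implied by the arithmetic above: square the sum */
	double sd_suspect = sqrt(sum * sum / samples - avg * avg);
	/* traditional form: accumulate the square of each sample */
	double sum2 = nonempty * scaled * scaled;
	double sd_trad = sqrt(sum2 / samples - avg * avg);

	printf("%-8s avg=%.2f sd_suspect>>3=%.1f sd_trad>>3=%.2f\n",
	       name, avg, sd_suspect / 8, sd_trad / 8);
}

int main(void)
{
	/* the simpler case above: 26214 slots, each chain of length 1 */
	report("uniform", 26214, 26214, 1);
	/* my earlier case: 7864320 samples, 5% with one entry */
	report("sparse", 7864320, 393216, 1);
	return 0;
}

This prints sd_suspect>>3 of 161.9 and 140.2 for the two cases, while the
traditional accumulation gives 0.00 and 0.22, so the inflation comes
entirely from where the squaring happens.
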
Neil
--
/****************************************************
* Neil Horman <nhorman@...driver.com>
* Software Engineer, Red Hat
****************************************************/