Message-ID: <49ED5813.1000803@cn.fujitsu.com>
Date: Tue, 21 Apr 2009 13:22:27 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Eric Dumazet <dada1@...mosbay.com>
CC: Stephen Hemminger <shemminger@...tta.com>,
Paul Mackerras <paulus@...ba.org>, paulmck@...ux.vnet.ibm.com,
Evgeniy Polyakov <zbr@...emap.net>,
David Miller <davem@...emloft.net>, kaber@...sh.net,
torvalds@...ux-foundation.org, jeff.chua.linux@...il.com,
mingo@...e.hu, jengelh@...ozas.de, r000n@...0n.net,
linux-kernel@...r.kernel.org, netfilter-devel@...r.kernel.org,
netdev@...r.kernel.org, benh@...nel.crashing.org,
mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH] netfilter: use per-cpu recursive lock (v11)
Eric Dumazet wrote:
> Lai Jiangshan a écrit :
>> Stephen Hemminger wrote:
>>> +/**
>>> + * xt_info_rdlock_bh - recursive read lock for xt table info
>>> + *
>>> + * Table processing calls this to hold off any changes to table
>>> + * (on current CPU). Always leaves with bottom half disabled.
>>> + * If called recursively, then assumes bh/preempt already disabled.
>>> + */
>>> +void xt_info_rdlock_bh(void)
>>> +{
>>> +	struct xt_info_lock *lock;
>>> +
>>> +	preempt_disable();
>>> +	lock = &__get_cpu_var(xt_info_locks);
>>> +	if (likely(++lock->depth == 0))
>> Maybe I missed something. I think softirqs may still be enabled here.
>> So what happens when xt_info_rdlock_bh() is called recursively here?
>
> well, the first time it's called, you are right: softirqs are enabled
> until the point we call spin_lock_bh(), right after this line:
If xt_info_rdlock_bh() is called recursively at this point, it enters the
critical region without holding &__get_cpu_var(xt_info_locks)->lock,
because the recursive call sees lock->depth >= 0, so "++lock->depth == 0"
is false and spin_lock_bh() is never taken.
>
>
>>> +		spin_lock_bh(&lock->lock);
>>> +	preempt_enable_no_resched();
>
> After this line, both softirqs and preemption are disabled.
>
> Future calls to this function temporarily raise the preempt count and
> then decrease it again (net effect: none).
>
>>> +}
>>> +EXPORT_SYMBOL_GPL(xt_info_rdlock_bh);
>>> +
>> Is this OK for you:
>>
>> void xt_info_rdlock_bh(void)
>> {
>> 	struct xt_info_lock *lock;
>>
>> 	local_bh_disable();
>
> well, Stephen was trying not to change the preempt count for the 2nd,
> 3rd, 4th... invocations of this function.
> This is how I understood the code.
>
>> 	lock = &__get_cpu_var(xt_info_locks);
>> 	if (likely(++lock->depth == 0))
>> 		spin_lock(&lock->lock);
>> }
>>
Sorry about that.
Is this OK:
void xt_info_rdlock_bh(void)
{
	struct xt_info_lock *lock;

	local_bh_disable();
	lock = &__get_cpu_var(xt_info_locks);
	if (likely(++lock->depth == 0))
		spin_lock(&lock->lock);
	else
		local_bh_enable();
}
I had not thought things through carefully enough, and I know
nothing about ip/ip6/arp.
Lai