Message-ID: <2375c9f91003120511j6f33592cl12cb2617a27351ec@mail.gmail.com>
Date: Fri, 12 Mar 2010 21:11:02 +0800
From: Américo Wang <xiyou.wangcong@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>, paulmck@...ux.vnet.ibm.com,
peterz@...radead.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: 2.6.34-rc1: rcu lockdep bug?
On Fri, Mar 12, 2010 at 7:11 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Friday, 12 March 2010 at 16:59 +0800, Américo Wang wrote:
>> On Fri, Mar 12, 2010 at 4:07 PM, David Miller <davem@...emloft.net> wrote:
>> > From: Américo Wang <xiyou.wangcong@...il.com>
>> > Date: Fri, 12 Mar 2010 15:56:03 +0800
>> >
>> >> Ok, after decoding the lockdep output, it looks like
>> >> netif_receive_skb() should call rcu_read_lock_bh() instead of rcu_read_lock()?
>> >> But I don't know if all callers of netif_receive_skb() are in softirq context.
>> >
>> > Normally, netif_receive_skb() is invoked from softirq context.
>> >
>> > However, via netpoll it can be invoked essentially from any context.
>> >
>> > But, when this happens, the networking receive path makes amends such
>> > that this works fine. That's what the netpoll_receive_skb() check in
>> > netif_receive_skb() is for. That check makes it bail out early if the
>> > call to netif_receive_skb() is via a netpoll invocation.
>> >
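(For reference, the early bail-out David describes looks roughly like
this. A simplified sketch, not the exact 2.6.33 net/core/dev.c:)

/*
 * Sketch: netpoll_receive_skb() returns nonzero when the skb arrived
 * via a netpoll session, so the normal receive path, including its
 * rcu_read_lock(), is skipped entirely for netpoll invocations.
 */
int netif_receive_skb(struct sk_buff *skb)
{
        if (netpoll_receive_skb(skb))
                return NET_RX_DROP;

        rcu_read_lock();
        /* ... protocol demux under RCU ... */
        rcu_read_unlock();
        return NET_RX_SUCCESS;
}
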
>>
>> Oh, I see. This means we should call rcu_read_lock_bh() instead.
>> If Paul has no objections, I will send a patch for this.
>>
>
> Nope, it's calling rcu_read_lock() from interrupt context and it should
> stay as is (we don't need to disable bh, which has a CPU cost).
>
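(The cost Eric refers to: rcu_read_lock_bh() also disables softirqs,
whereas plain rcu_read_lock() is only a preempt_disable(), or even a
no-op, depending on the RCU implementation. Roughly, as a sketch rather
than the exact rcupdate.h:)

static inline void rcu_read_lock(void)
{
        __rcu_read_lock();      /* preempt_disable() under TREE_RCU */
}

static inline void rcu_read_lock_bh(void)
{
        __rcu_read_lock_bh();   /* local_bh_disable(), the extra cost */
}
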
Oh, but lockdep complains about rcu_read_lock(); it says
rcu_read_lock() can't be used in softirq context.
Am I missing something?
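
(If the goal is only to teach lockdep that either RCU flavor is a
legitimate reader here, one hypothetical alternative to switching the
path to rcu_read_lock_bh() would be an annotation along these lines,
assuming the rcu_dereference_check() and rcu_read_lock_bh_held()
helpers from the new lockdep-RCU work in -rc1 are available. A sketch,
not a tested patch; "ptype" and "pt_prev" stand in for whatever pointer
the splat actually flagged:)

/*
 * Hypothetical annotation: holding either rcu_read_lock() or
 * rcu_read_lock_bh() makes this dereference legal.
 */
ptype = rcu_dereference_check(pt_prev,
                              rcu_read_lock_held() ||
                              rcu_read_lock_bh_held());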