Message-ID: <20130330091921.GB1544@minipsycho.orion>
Date: Sat, 30 Mar 2013 10:19:21 +0100
From: Jiri Pirko <jiri@...nulli.us>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Jiri Pirko <jpirko@...hat.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Andy Gospodarek <andy@...yhouse.net>,
"David S. Miller" <davem@...emloft.net>,
LKML <linux-kernel@...r.kernel.org>,
netdev <netdev@...r.kernel.org>,
Nicolas de Pesloüan
<nicolas.2p.debian@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Guy Streeter <streeter@...hat.com>,
"Paul E. McKenney" <paulmck@...ibm.com>, stephen@...workplumber.org
Subject: Re: [BUG] Crash with NULL pointer dereference in bond_handle_frame
in -rt (possibly mainline)
Fri, Mar 29, 2013 at 07:36:24PM CET, rostedt@...dmis.org wrote:
>On Fri, 2013-03-29 at 10:48 +0100, Jiri Pirko wrote:
>
>> Because, if rcu_dereference(dev->rx_handler) is null,
>> rcu_dereference(dev->rx_handler_data) is never done. Therefore I believe
>> we are hitting following scenario:
>>
>>
>>   CPU0                                  CPU1
>>   ----                                  ----
>>   dev->rx_handler_data = NULL
>>                                         rcu_read_lock()
>>   dev->rx_handler = NULL
>>
>>
>
>That is not what is happening and that is not how RCU works. That is,
>rcu_read_lock() does not block nor does it really do much with ordering
>at all.
>
>The problem is totally contained within the rcu_read_lock() as well:
>
>
>If you have:
>
> rcu_read_lock();
> rx_handler = dev->rx_handler;
> rx_handler();
> rcu_read_unlock();
>
>where rx_handler references dev->rx_handler_data, you need much more than
>making sure that dev->rx_handler is set to NULL before rx_handler_data.
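>
>Roughly, the reader side looks like this (simplified sketch; the real code
>in __netif_receive_skb() and bond_handle_frame() differs in detail, but the
>shape is the same):
>
>	/* caller, under RCU: */
>	rcu_read_lock();
>	rx_handler = rcu_dereference(skb->dev->rx_handler);
>	if (rx_handler)
>		rx_handler(&skb);	/* may still be the "old" handler */
>	rcu_read_unlock();
>
>	/* ... and a bonding-style handler, which is where rx_handler_data
>	 * actually gets used: */
>	static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
>	{
>		struct slave *slave;
>
>		slave = rcu_dereference((*pskb)->dev->rx_handler_data);
>		/* if rx_handler_data was already cleared, slave is NULL
>		 * here and dereferencing it blows up */
>		...
>	}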
>
>The way RCU works is that it lets things exist in a "dual state". Kind of
>like Schrödinger's cat. The solution Eric posted is a classic RCU
>example of how this works.
>
>When you set dev->rx_handler to NULL, there are two states that currently
>exist in the system: CPUs that still see dev->rx_handler set to
>something, and CPUs that see it set to NULL. You could put in memory
>barriers to your heart's content, but you will still have a system that
>sees things in a dual state. If you now set dev->rx_handler_data to NULL,
>you risk that a CPU which still sees rx_handler as a function will
>reference rx_handler_data after it has become NULL.
>
>Think of it this way:
>
> dev->rx_handler() {
>
>Once the function has been called, even if you set rx_handler to NULL
>at this point, it makes no difference, even with memory barriers. This
>CPU is about to execute the previous value of rx_handler and there's
>nothing you can do to stop it. Setting rx_handler_data to NULL now can
>cause that CPU to reference the NULL pointer. This isn't an ordering
>problem of rx_handler_data getting set to NULL first; the old handler
>simply runs after rx_handler_data has already been cleared.
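>
>In other words, an unregister that just clears both pointers back to back,
>like this (sketch, made-up function name), cannot be fixed with barriers:
>
>	void broken_rx_handler_unregister(struct net_device *dev)
>	{
>		RCU_INIT_POINTER(dev->rx_handler, NULL);
>		RCU_INIT_POINTER(dev->rx_handler_data, NULL);
>		/* a CPU that sampled the old rx_handler just before the
>		 * first store may still be about to call it, and will then
>		 * read the now-NULL rx_handler_data -- no memory barrier
>		 * can take that sample back */
>	}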
>
>But the beauty about RCU is the synchronize_*() functions, because that
>puts the system back into a single state. After the synchronization is
>complete, the entire system sees rx_handler as NULL. There is no worry
>about setting rx_handler_data to NULL now because nothing will be
>referencing the previous value of rx_handler because that value no
>longer exists in the system.
>
>That means Eric's solution fits perfectly well here.
>
> < system in single state : everyone sees rx_handler = function() >
>
> rx_handler = NULL;
>
> < system in dual state : new calls see rx_handler = NULL, but
> current calls see rx_handler = function >
>
> synchronize_net();
>
> < system is back to single state: everyone sees rx_handler = NULL >
>
> rx_handler_data = NULL;
>
>no problem ;-)
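>
>In code, the unregister side then does something like this (a sketch of the
>sequence above, which is roughly what Eric's patch turns
>netdev_rx_handler_unregister() into):
>
>	void netdev_rx_handler_unregister(struct net_device *dev)
>	{
>		ASSERT_RTNL();
>		RCU_INIT_POINTER(dev->rx_handler, NULL);
>		/* collapse the dual state: wait until no CPU can still be
>		 * running (or about to run) the old handler */
>		synchronize_net();
>		/* only now is it safe to clear the handler's data */
>		RCU_INIT_POINTER(dev->rx_handler_data, NULL);
>	}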
>
>-- Steve
I think I understand now. I was under the false impression that once
rcu_read_lock() is held, the value returned by rcu_dereference(pointer) is
predetermined (for that single read-side section, I mean).
Thank you very much for the explanation!
Jiri