Message-ID: <20130306191408.GN1227@htj.dyndns.org>
Date: Wed, 6 Mar 2013 11:14:08 -0800
From: Tejun Heo <tj@...nel.org>
To: Lei Wen <adrian.wenl@...il.com>
Cc: linux-kernel@...r.kernel.org, leiwen@...vell.com
Subject: Re: workqueue panic in 3.4 kernel

Hello, Lei.

On Wed, Mar 06, 2013 at 10:39:15PM +0800, Lei Wen wrote:
> We find a race condition as below:
>
>   CPU0                                            CPU1
>   timer interrupt happens
>   __run_timers
>     __run_timers::spin_lock_irq(&base->lock)
>     __run_timers::spin_unlock_irq(&base->lock)
>                                                   __cancel_work_timer
>                                                   __cancel_work_timer::del_timer
>                                                   __cancel_work_timer::wait_on_work
>                                                   __cancel_work_timer::clear_work_data
>     __run_timers::call_timer_fn(timer, fn, data);
>       delayed_work_timer_fn::get_work_cwq
>     __run_timers::spin_lock_irq(&base->lock)
>
> It is possible for __cancel_work_timer to run on cpu1 __BEFORE__ cpu0
> runs the timer callback, which is delayed_work_timer_fn in our case.
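
For context, delayed_work_timer_fn() in 3.4 boils down to roughly the
following (a paraphrased sketch, not the verbatim source); the oops in
the reported trace would be the cwq dereference after the work data
has been cleared:

	/* rough sketch of the 3.4 timer callback, paraphrased */
	void delayed_work_timer_fn(unsigned long __data)
	{
		struct delayed_work *dwork = (struct delayed_work *)__data;
		struct cpu_workqueue_struct *cwq = get_work_cwq(&dwork->work);

		/* if the work data were already cleared, get_work_cwq()
		 * would return NULL and the dereference below would oops */
		__queue_work(smp_processor_id(), cwq->wq, &dwork->work);
	}

That said, the cancel path is built so that this can't happen while
the timer is still pending or running.
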
If del_timer() happens after the timer starts running, del_timer()
would return 0 and try_to_grab_pending() will be called, which will
return >= 0 iff it successfully steals the PENDING bit (i.e. it's the
sole owner of the work item).  If del_timer() happens before the timer
starts running, the timer function never runs.

clear_work_data() happens iff the work item is confirmed to be idle.
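
For reference, the cancel path in 3.4 is roughly the following
(a sketch with details elided); clear_work_data() is only reached once
the loop has claimed PENDING and wait_on_work() has drained any
running instance:

	/* rough sketch of the 3.4 cancel path, details elided */
	static bool __cancel_work_timer(struct work_struct *work,
					struct timer_list *timer)
	{
		int ret;

		do {
			/* steal PENDING by killing the pending timer ... */
			ret = (timer && likely(del_timer(timer)));
			if (!ret)
				/* ... or by grabbing it directly; < 0 means retry */
				ret = try_to_grab_pending(work);
			wait_on_work(work);
		} while (unlikely(ret < 0));

		/* only reached as the sole owner of an idle work item */
		clear_work_data(work);
		return ret;
	}
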
At this point, I'm pretty skeptical this is a bug in workqueue itself
and strongly suggest looking at the crashing workqueue user.

Thanks.

--
tejun