Message-ID: <20120420052633.GA16219@zhy>
Date: Fri, 20 Apr 2012 13:26:33 +0800
From: Yong Zhang <yong.zhang0@...il.com>
To: Stephen Boyd <sboyd@...eaurora.org>
Cc: linux-kernel@...r.kernel.org, Tejun Heo <tj@...nel.org>,
netdev@...r.kernel.org, Ben Dooks <ben-linux@...ff.org>
Subject: Re: [PATCH 1/2] workqueue: Catch more locking problems with
flush_work()
On Thu, Apr 19, 2012 at 11:36:32AM -0700, Stephen Boyd wrote:
> Does looking at the second patch help? Basically schedule_work() can run
> the callback right between the time the mutex is acquired and
> flush_work() is called:
>
> CPU0 CPU1
>
> <irq>
> schedule_work() mutex_lock(&mutex)
> <irq return>
> my_work() flush_work()
> mutex_lock(&mutex)
> <deadlock>
I get your point; it is a real problem. But your patch could introduce
false positives, since by the time flush_work() is called that very
work may have already finished running.
So I think we need the lock_map_acquire()/lock_map_release() only when
the work is actually being processed, no?
Thanks,
Yong
--