Message-ID: <20110103150052.GU18831@htj.dyndns.org>
Date: Mon, 3 Jan 2011 16:00:52 +0100
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, "Rafael J. Wysocki" <rjw@...k.pl>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH UPDATED] workqueue: relax lockdep annotation on
flush_work()
Hello,
On Mon, Jan 03, 2011 at 03:54:50PM +0100, Peter Zijlstra wrote:
> On Mon, 2011-01-03 at 15:17 +0100, Tejun Heo wrote:
>
> > @@ -2384,8 +2384,18 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> > insert_wq_barrier(cwq, barr, work, worker);
> > spin_unlock_irq(&gcwq->lock);
> >
> > - lock_map_acquire(&cwq->wq->lockdep_map);
> > + /*
> > + * If @max_active is 1 or rescuer is in use, flushing another work
> > + * item on the same workqueue may lead to deadlock. Make sure the
> > + * flusher is not running on the same workqueue by verifying write
> > + * access.
> > + */
> > + if (cwq->wq->saved_max_active == 1 || cwq->wq->flags & WQ_RESCUER)
> > + lock_map_acquire(&cwq->wq->lockdep_map);
> > + else
> > + lock_map_acquire_read(&cwq->wq->lockdep_map);
> > lock_map_release(&cwq->wq->lockdep_map);
> > +
> > return true;
> > already_gone:
> > spin_unlock_irq(&gcwq->lock);
>
> Ah, but this violates the rule that you must always use the most strict
> constraints. Code doesn't know if it will run in a rescue thread or not,
> hence it must assume it does.
Hmmm? The code applies the most strict constraints. If the workqueue
has a rescuer, flushing another work item from the workqueue will
always trigger a lockdep warning. The rule is relaxed only for
workqueues which aren't used for memory reclaim and which support
parallel execution.
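
To illustrate, here's the kind of pattern the write acquisition is
meant to catch (the names example_wq / example_work_fn below are made
up for this sketch, they're not from the patch): a work item flushing
a sibling work item on the same max_active == 1 workqueue. The
sibling is queued behind the running item and can't start until it
returns, so flush_work() waits forever.

#include <linux/module.h>
#include <linux/workqueue.h>

/* hypothetical names, only for illustration */
static struct workqueue_struct *example_wq;	/* max_active == 1 */
static struct work_struct example_work;
static struct work_struct other_work;

static void other_work_fn(struct work_struct *work)
{
	/* never starts while example_work_fn() is still running */
}

static void example_work_fn(struct work_struct *work)
{
	queue_work(example_wq, &other_work);

	/*
	 * Deadlocks: other_work is queued behind this item on the
	 * same single-threaded workqueue, so it can't begin until we
	 * return, and we don't return until it finishes.
	 */
	flush_work(&other_work);
}

static int __init example_init(void)
{
	example_wq = alloc_workqueue("example", 0, 1);
	if (!example_wq)
		return -ENOMEM;

	INIT_WORK(&example_work, example_work_fn);
	INIT_WORK(&other_work, other_work_fn);
	queue_work(example_wq, &example_work);
	return 0;
}
module_init(example_init);
MODULE_LICENSE("GPL");

With WQ_RESCUER or max_active == 1 the write acquire keeps flagging
this; for other workqueues the read acquire lets an unrelated flusher
proceed without a false positive.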
Thanks.
--
tejun