Message-ID: <1294066490.2016.81.camel@laptop>
Date: Mon, 03 Jan 2011 15:54:50 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>, "Rafael J. Wysocki" <rjw@...k.pl>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH UPDATED] workqueue: relax lockdep annotation on
flush_work()
On Mon, 2011-01-03 at 15:17 +0100, Tejun Heo wrote:
> @@ -2384,8 +2384,18 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> insert_wq_barrier(cwq, barr, work, worker);
> spin_unlock_irq(&gcwq->lock);
>
> - lock_map_acquire(&cwq->wq->lockdep_map);
> + /*
> + * If @max_active is 1 or rescuer is in use, flushing another work
> + * item on the same workqueue may lead to deadlock. Make sure the
> + * flusher is not running on the same workqueue by verifying write
> + * access.
> + */
> + if (cwq->wq->saved_max_active == 1 || cwq->wq->flags & WQ_RESCUER)
> + lock_map_acquire(&cwq->wq->lockdep_map);
> + else
> + lock_map_acquire_read(&cwq->wq->lockdep_map);
> lock_map_release(&cwq->wq->lockdep_map);
> +
> return true;
> already_gone:
> spin_unlock_irq(&gcwq->lock);
Ah, but this violates the rule that you must always use the strictest
constraints. The code doesn't know whether it will run in a rescuer
thread or not, hence it must assume it does.
--