Date: Sun, 11 Sep 2011 10:35:49 +0900
From: Tejun Heo <tj@...nel.org>
To: Thomas Tuttle <ttuttle@...omium.org>
Cc: lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] workqueue: lock cwq access in drain_workqueue

Hello,

On Fri, Sep 09, 2011 at 07:00:53PM -0400, Thomas Tuttle wrote:
> Take cwq->gcwq->lock to avoid racing between drain_workqueue checking
> to make sure the workqueues are empty and cwq_dec_nr_in_flight
> decrementing and then incrementing nr_active when it activates a
> delayed work.

Nice catch.  Just a few minor nits below.

> We discovered this when a corner case in one of our drivers resulted in
> us trying to destroy a workqueue in which the remaining work would
> always requeue itself again in the same workqueue.  We would hit this
> race condition and trip the BUG_ON on workqueue.c:3080.
>
> Signed-off-by: Thomas Tuttle <ttuttle@...omium.org>
> ---
> Updated to use bool instead of int (d'oh), and CCed maintainer.
>
>  kernel/workqueue.c |    8 +++++++-
>  1 files changed, 7 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 25fb1b0..0c2e585 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2412,8 +2412,14 @@ reflush:
>
>  	for_each_cwq_cpu(cpu, wq) {
>  		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
> +		bool cwq_flushed;

Maybe "drained" would be better?

> -		if (!cwq->nr_active && list_empty(&cwq->delayed_works))
> +		spin_lock_irq(&cwq->gcwq->lock);
> +		cwq_flushed = !cwq->nr_active
> +			&& list_empty(&cwq->delayed_works);

and then this should fit inside 80 columns, right?

Thanks.

-- 
tejun

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/