Message-ID: <20070425144759.GA201@tv-sign.ru>
Date: Wed, 25 Apr 2007 18:47:59 +0400
From: Oleg Nesterov <oleg@...sign.ru>
To: Jarek Poplawski <jarkao2@...pl>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
David Howells <dhowells@...hat.com>
Subject: Re: Fw: [PATCH -mm] workqueue: debug possible endless loop in cancel_rearming_delayed_work
On 04/25, Oleg Nesterov wrote:
>
> On 04/25, Jarek Poplawski wrote:
> >
> > Probably this is also possible without a timer, i.e.
> > with queue_work.
>
> Yes, thanks. While adding the cpu-hotplug check I forgot to add the
> ->current_work check, which is needed to actually implement this
>
> > > Note that cancel_rearming_delayed_work() now can handle the works
> > > which re-arm itself via queue_work(), not only queue_delayed_work().
>
> part. I'll resend after fix.
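
(A purely illustrative sketch of the case quoted above, not from any patch: a
work that re-arms itself via queue_work() instead of queue_delayed_work(). The
names rearm_wq, rearm_dwork and rearm_fn are invented for this example.)

#include <linux/workqueue.h>

static struct workqueue_struct *rearm_wq;	/* invented for the example */
static struct delayed_work rearm_dwork;		/* invented for the example */

static void rearm_fn(struct work_struct *work)
{
	/* ... the periodic processing itself ... */

	/* re-arm immediately via queue_work(), no timer this time */
	queue_work(rearm_wq, &rearm_dwork.work);
}

/*
 * Setup would be something like:
 *	INIT_DELAYED_WORK(&rearm_dwork, rearm_fn);
 *	queue_delayed_work(rearm_wq, &rearm_dwork, HZ);
 *
 * and teardown has to cope with "pending work, no pending timer":
 *	cancel_rearming_delayed_work(&rearm_dwork);
 */
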
Hm. But can't we do better? It looks like we don't need to check ->current_work:

void cancel_rearming_delayed_work(struct delayed_work *dwork)
{
	struct work_struct *work = &dwork->work;
	struct cpu_workqueue_struct *cwq = get_wq_data(work);
	int done;

	do {
		done = 1;

		spin_lock_irq(&cwq->lock);
		if (!list_empty(&work->entry))
			list_del_init(&work->entry);
		else if (test_and_set_bit(WORK_STRUCT_PENDING, work_data_bits(work)))
			done = del_timer(&dwork->timer);
		spin_unlock_irq(&cwq->lock);
	} while (!done);

	/*
	 * Nobody can clear WORK_STRUCT_PENDING. This means that the
	 * work can't be re-queued and the timer can't be re-started.
	 */
	needs_a_good_name(cwq->wq, work);

	work_clear_pending(work);
}
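
(A guess at what needs_a_good_name() might look like, not part of the mail or
any patch: its job is assumed to be waiting until a callback of this work that
is already running has finished. The sketch assumes the wq_barrier and
insert_wq_barrier helpers from the flush_workqueue() rework and the
cwq->current_work tracking are available in the tree; singlethread workqueues
and CPU hotplug are glossed over, as in the mail.)

static void needs_a_good_name(struct workqueue_struct *wq,
				struct work_struct *work)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
		struct wq_barrier barr;
		int running = 0;

		spin_lock_irq(&cwq->lock);
		if (cwq->current_work == work) {
			/* the barrier runs right after the current work */
			insert_wq_barrier(cwq, &barr, 0);
			running = 1;
		}
		spin_unlock_irq(&cwq->lock);

		if (running)
			wait_for_completion(&barr.done);
	}
}

Since nothing can clear WORK_STRUCT_PENDING at this point, no new instance can
be queued behind us, so waiting for the currently running callback should be
enough before work_clear_pending().
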
Jarek, I didn't think much about this, just a new idea. I am posting this code
in the hope you can review it while I sleep on this... CPU-hotplug is ignored for
now. Note that this version doesn't need the change in run_workqueue().
Oleg.