Message-ID: <20091115234031.GB6090@nowhere>
Date: Mon, 16 Nov 2009 00:40:33 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Oleg Nesterov <oleg@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Rafael J. Wysocki" <rjw@...k.pl>, Mike Galbraith <efault@....de>,
Ingo Molnar <mingo@...e.hu>,
LKML <linux-kernel@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
Greg KH <gregkh@...e.de>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: GPF in run_workqueue()/list_del_init(cwq->worklist.next) on
resume (was: Re: Help needed: Resume problems in 2.6.32-rc, perhaps
related to preempt_count leakage in keventd)
On Mon, Nov 16, 2009 at 12:37:06AM +0100, Frederic Weisbecker wrote:
> On Thu, Nov 12, 2009 at 06:33:00PM +0100, Thomas Gleixner wrote:
> > @@ -145,6 +255,7 @@ static void __queue_work(struct cpu_work
> > {
> > unsigned long flags;
> >
> > + debug_work_activate(work);
> > spin_lock_irqsave(&cwq->lock, flags);
> > insert_work(cwq, work, &cwq->worklist);
>
>
>
> Since you are also doing that from insert_wq_barrier(), whose
> endpoint is insert_work() as well, why not put debug_work_activate()
> there instead? Or maybe you really prefer to do this outside the
> spinlock (which is zero-overhead in the off case). Or maybe it
> can sleep?
/me now remembers this path can't sleep, since a work can be
queued from anywhere (including irq context)... so I guess this
is to avoid bloating the lock overhead.

Anyway, this is really a minor detail.
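
For the archive, a minimal sketch of the pattern in question. The
hook name and the __queue_work body follow the quoted diff; the
config name, the debugobjects calls and the comments are my own
simplified assumptions about how the patch fits together, not the
patch itself:

	#ifdef CONFIG_DEBUG_OBJECTS_WORK
	static void debug_work_activate(struct work_struct *work)
	{
		/* debugobjects tracks the work item's state transitions */
		debug_object_activate(work, &work_debug_descr);
	}
	#else
	static inline void debug_work_activate(struct work_struct *work) { }
	#endif

	static void __queue_work(struct cpu_workqueue_struct *cwq,
				 struct work_struct *work)
	{
		unsigned long flags;

		/*
		 * Called outside cwq->lock: with the config off the stub
		 * above compiles to nothing, and with it on the debug
		 * checks don't lengthen the critical section. Callers
		 * may run in irq context, hence the irqsave variant and
		 * no sleeping anywhere on this path.
		 */
		debug_work_activate(work);
		spin_lock_irqsave(&cwq->lock, flags);
		insert_work(cwq, work, &cwq->worklist);
		spin_unlock_irqrestore(&cwq->lock, flags);
	}

Putting the call inside insert_work() would cover insert_wq_barrier()
too, but would then run the debug checks with the lock held.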