Message-ID: <20120211002349.GN19392@google.com>
Date: Fri, 10 Feb 2012 16:23:49 -0800
From: Tejun Heo <tj@...nel.org>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Jens Axboe <axboe@...nel.dk>, "Rafael J. Wysocki" <rjw@...k.pl>,
Linux-pm mailing list <linux-pm@...r.kernel.org>,
Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: Bug in disk event polling
Hello,
On Fri, Feb 10, 2012 at 04:44:48PM -0500, Alan Stern wrote:
> > I think it should be nrt. It assumes that no one else is running it
> > concurrently; otherwise, multiple CPUs could jump into
> > disk->fops->check_events() concurrently which can be pretty ugly.
>
> Come to mention it, how can a single work item ever run on more than
> one CPU concurrently? Are you concerned about cases where some other
> thread requeues the work item while it is executing?
Yeah, there are multiple paths which may queue the work item. For the
polling work it definitely used to be possible, though maybe later
locking changes removed that. Even then, it would be better to use an
nrt wq, as a bug caused that way would be very difficult to track down.
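
Something like the following is what I mean - just a sketch, the names
and polling interval are made up and this is not the actual genhd.c
code:

	/* sketch only, hypothetical names - not the actual genhd.c code */
	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	static struct workqueue_struct *ev_wq;
	static struct delayed_work ev_work;

	static void ev_poll_fn(struct work_struct *work)
	{
		/*
		 * Never runs concurrently with itself because ev_wq is
		 * non-reentrant, so check_events() can't be entered on
		 * two CPUs even if several paths requeue the work.
		 */
		/* disk->fops->check_events(disk, 0); */

		queue_delayed_work(ev_wq, &ev_work, msecs_to_jiffies(5000));
	}

	static int __init ev_init(void)
	{
		ev_wq = alloc_workqueue("disk_events", WQ_NON_REENTRANT, 0);
		if (!ev_wq)
			return -ENOMEM;

		INIT_DELAYED_WORK(&ev_work, ev_poll_fn);
		queue_delayed_work(ev_wq, &ev_work, 0);
		return 0;
	}

The point being that a requeue while the work is running gets executed
after the current instance finishes instead of in parallel on another
CPU.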
> The problem is that these async threads generally aren't freezable.
> They will continue to run and do I/O while a system goes through a
> sleep transition. How should this be handled?
I think it would be better to use a wq for most kthreads. A lot of them
aren't strictly correct in how they deal with kthread_should_stop() and
freezing. kthread in general simply seems way too difficult to use
correctly.
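
For reference, to be strictly correct a polling kthread needs to look
roughly like the following (just a sketch of the pattern, not taken
from any particular driver); it's easy to forget either the
set_freezable() or the try_to_freeze():

	#include <linux/kthread.h>
	#include <linux/freezer.h>
	#include <linux/sched.h>

	static int poll_thread(void *data)
	{
		set_freezable();		/* opt in to the freezer */

		while (!kthread_should_stop()) {
			try_to_freeze();	/* block here during suspend */

			/* ... poll the device, do I/O ... */

			schedule_timeout_interruptible(HZ);
		}
		return 0;
	}

	/* started elsewhere with something like:
	 *	task = kthread_run(poll_thread, dev, "mydev-poll");
	 * and stopped with kthread_stop(task);
	 */

With a wq all of that collapses into a work item on a WQ_FREEZABLE
workqueue.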
> kthread_run() can be adjusted on a case-by-case basis, by inserting
> calls to set_freezable() and try_to_freeze() at the appropriate places.
> But what about async_schedule()?
Given the stuff async is used for, maybe just make all async execution
freezable?
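
E.g. the typical async user looks something like this (made-up sketch,
my_dev / my_dev_hw_init are stand-ins that don't exist anywhere), which
is exactly the kind of thing that shouldn't be touching hardware in the
middle of a suspend transition:

	#include <linux/async.h>

	/* hypothetical driver bits, for illustration only */
	struct my_dev { int id; };

	static void my_dev_hw_init(struct my_dev *md)
	{
		/* slow hardware init, possibly doing disk/flash I/O */
	}

	static void probe_async(void *data, async_cookie_t cookie)
	{
		/* runs in an async worker, which today isn't freezable */
		my_dev_hw_init(data);
	}

	static void my_probe(struct my_dev *md)
	{
		async_schedule(probe_async, md);
	}

	/* callers that need everything finished use async_synchronize_full() */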
Thanks.
--
tejun