Message-ID: <1314804041.3578.42.camel@twins>
Date: Wed, 31 Aug 2011 17:20:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ripduman Sohan <ripduman.sohan@...cam.ac.uk>
Cc: tj@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] workqueue: Restore cpus_allowed mask for sleeping workqueue rescue threads
On Wed, 2011-08-31 at 14:17 +0100, Ripduman Sohan wrote:
> Rescuer threads may be migrated (and are bound) to particular CPUs when
> active. However, the allowed_cpus mask is not restored when they return
> to sleep, leaving the advertised set of CPUs the process may run on
> inconsistent with the actual set. This patch fixes the oversight by
> recording the allowed_cpus mask when a rescuer thread enters the
> rescuer_thread() main loop and restoring it each time the thread sleeps.
>
> Signed-off-by: Ripduman Sohan <ripduman.sohan@...cam.ac.uk>
> ---
> kernel/workqueue.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 25fb1b0..0a4e785 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2031,6 +2031,7 @@ static int rescuer_thread(void *__wq)
> struct list_head *scheduled = &rescuer->scheduled;
> bool is_unbound = wq->flags & WQ_UNBOUND;
> unsigned int cpu;
> + cpumask_t allowed_cpus = current->cpus_allowed;
except you cannot just allocate a cpumask_t like that on the stack,
those things can be massive.
> set_user_nice(current, RESCUER_NICE_LEVEL);
> repeat:
> @@ -2078,6 +2079,8 @@ repeat:
> spin_unlock_irq(&gcwq->lock);
> }
>
> + set_cpus_allowed_ptr(current, &allowed_cpus);
> +
> schedule();
> goto repeat;
> }