Message-ID: <1456261535.15241.96.camel@decadent.org.uk>
Date: Tue, 23 Feb 2016 21:05:35 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: Mike Galbraith <umgwanakikbuti@...il.com>,
Byungchul Park <byungchul.park@....com>,
Greg KH <gregkh@...uxfoundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>, stable@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [STABLE] kernel oops which can be fixed by peterz's patches
On Wed, 2016-02-17 at 04:02 +0100, Mike Galbraith wrote:
[...]
> @stable: Kernels that predate SCHED_DEADLINE can use this simple (and tested)
> check in lieu of a backport of the full 18-patch mainline treatment.
>
> Signed-off-by: Mike Galbraith <umgwanakikbuti@...il.com>
> ---
> kernel/sched/fair.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4008,6 +4008,7 @@ int can_migrate_task(struct task_struct
> * 2) cannot be migrated to this CPU due to cpus_allowed, or
> * 3) running (obviously), or
> * 4) are cache-hot on their current CPU.
> + * 5) p->pi_lock is held.
> */
> if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
> return 0;
> @@ -4049,6 +4050,14 @@ int can_migrate_task(struct task_struct
> }
>
> /*
> + * An rt -> fair class change may be in progress. If we sneak in while
> + * double_lock_balance() has released rq->lock and move the task,
> + * switched_to_fair() will be handed an rq that is no longer valid.
> + */
> + if (raw_spin_is_locked(&p->pi_lock))
> + return 0;
> +
> + /*
> * Aggressive migration if:
> * 1) task is cache cold, or
> * 2) too many balance attempts have failed.
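
The guard relies on the class change being carried out under p->pi_lock, so a
balancer that merely observes the lock as held knows a switch may be in flight
and can simply skip the task. For illustration only, here is a minimal
userspace sketch of that "observe the lock, bail out if held" pattern; every
name in it is invented, and C11 atomics stand in for the kernel's raw
spinlocks -- this is not the actual kernel code:

/*
 * Minimal userspace sketch of the "observe p->pi_lock and bail out" idea.
 * All names are invented for illustration; C11 atomics stand in for the
 * kernel's raw spinlocks.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef atomic_int toy_spinlock_t;	/* 0 = free, 1 = held */

static void toy_lock(toy_spinlock_t *l)
{
	int expected = 0;

	/* Spin until we flip the lock from free to held. */
	while (!atomic_compare_exchange_weak_explicit(l, &expected, 1,
						      memory_order_acquire,
						      memory_order_relaxed))
		expected = 0;
}

static void toy_unlock(toy_spinlock_t *l)
{
	atomic_store_explicit(l, 0, memory_order_release);
}

/* Observe-only probe, playing the role of raw_spin_is_locked(). */
static bool toy_is_locked(toy_spinlock_t *l)
{
	return atomic_load_explicit(l, memory_order_relaxed) != 0;
}

struct toy_task {
	toy_spinlock_t pi_lock;		/* held across the whole class change */
	int sched_class;		/* 0 = rt, 1 = fair */
};

/*
 * Balancer side: mirror the patch's bail-out -- refuse to move a task
 * whose pi_lock is observed as held, because its class change (and the
 * rq it would hand to switched_to_fair()) may still be in flux.
 */
static bool toy_can_migrate(struct toy_task *p)
{
	return !toy_is_locked(&p->pi_lock);
}

int main(void)
{
	struct toy_task p = { .pi_lock = 0, .sched_class = 0 };

	toy_lock(&p.pi_lock);		/* rt -> fair switch begins */
	printf("mid-switch:   can migrate? %d\n", toy_can_migrate(&p));
	p.sched_class = 1;
	toy_unlock(&p.pi_lock);		/* switch complete */
	printf("after switch: can migrate? %d\n", toy_can_migrate(&p));
	return 0;
}

Note that in the actual patch the balancer never takes p->pi_lock at all; it
only peeks at it from under rq->lock, which is why a read-only
raw_spin_is_locked() probe, rather than a trylock, is sufficient.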
Queued up for 3.2, thanks.
Ben.
--
Ben Hutchings
Any smoothly functioning technology is indistinguishable from a rigged demo.