Message-ID: <20180219135644.GG25181@hirez.programming.kicks-ass.net>
Date: Mon, 19 Feb 2018 14:56:44 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: mingo@...hat.com, valentin.schneider@....com,
dietmar.eggemann@....com, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/7] sched/fair: Add group_misfit_task load-balance type
On Thu, Feb 15, 2018 at 04:20:49PM +0000, Morten Rasmussen wrote:
> @@ -6733,9 +6758,12 @@ done: __maybe_unused
> if (hrtick_enabled(rq))
> hrtick_start_fair(rq, p);
>
> + update_misfit_status(p, rq);
> +
> return p;
>
> idle:
> + update_misfit_status(NULL, rq);
> new_tasks = idle_balance(rq, rf);
>
> /*
So we set a pointer when picking a task (or at the tick). We clear said
pointer when going idle.
> @@ -7822,6 +7855,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> */
> if (!nr_running && idle_cpu(i))
> sgs->idle_cpus++;
> +
> + if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> + !sgs->group_misfit_task_load && rq->misfit_task_load)
> + sgs->group_misfit_task_load = rq->misfit_task_load;
> }
>
> /* Adjust by relative CPU capacity of the group */
And we read said pointer from another CPU, without holding the
respective rq->lock.
What happens, if right after we set sgs->group_misfit_task_load, our
task decides to exit?