Message-ID: <20200219140243.wfljmupcrwm2jelo@e107158-lin>
Date: Wed, 19 Feb 2020 14:02:44 +0000
From: Qais Yousef <qais.yousef@....com>
To: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] sched/rt: fix pushing unfit tasks to a better CPU
On 02/17/20 13:53, Qais Yousef wrote:
> On 02/17/20 14:53, Pavan Kondeti wrote:
> > I notice a case where tasks migrate for no reason (it happens without this
> > patch also). Assume the BIG cores are busy with other RT tasks; now this RT
> > task can go to *any* little CPU, with no bias towards its previous CPU.
> > I don't know if it makes any difference, but I see that RT task placement
> > isn't too keen on reducing migrations unless it is absolutely needed.
>
> In find_lowest_rq() there's a check whether task_cpu(p) is in the
> lowest_mask, and we prefer it if it is.
>
> But yeah, I see it happening too:
>
> https://imgur.com/a/FYqLIko
>
> Tasks on CPUs 0 and 3 swap. Note that my tasks are periodic, but the plots
> don't show that.
>
> I don't think I changed anything that should affect this bias. Do you think
> it's something I introduced?
>
> It's maybe something worth digging into, though. I'll try to have a look.
FWIW, I dug a bit into this and found out we have a thundering herd issue.

Since I just have a set of periodic tasks that all start together,
select_task_rq_rt() ends up selecting the same fitting CPU for all of them
(CPU1). They all end up waking up on CPU1, only to get pushed back out again,
with only one task surviving there.

This reshuffles the task placement, ending with some tasks having swapped
places.

I don't think this problem is specific to my change; it could happen without
it too.
The problem is caused by the way find_lowest_rq() selects a CPU in the mask:

	best_cpu = cpumask_first_and(lowest_mask,
				     sched_domain_span(sd));
	if (best_cpu < nr_cpu_ids) {
		rcu_read_unlock();
		return best_cpu;
	}
cpumask_first_and() always returns the first CPU in the mask (or the mask
might contain only a single CPU in the first place). The end result is that
we most likely end up herding all the tasks that wake up simultaneously onto
the same CPU.
I'm not sure how to fix this problem yet.
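
To make the effect concrete, below is a standalone userspace sketch (not
kernel code; pick_first(), pick_rotating() and the 4-CPU mask are made up for
illustration). pick_first() mimics what cpumask_first_and() does today: every
waker that sees the same lowest_mask gets the same answer. pick_rotating()
shows one naive idea, remembering the last pick and starting the search after
it, so that back-to-back wakeups spread across the fitting CPUs:

#include <stdio.h>

#define NR_CPUS 4

/* Mimics cpumask_first_and(): always the lowest set bit, so every
 * caller that sees the same mask gets the same CPU. */
static int pick_first(unsigned int mask)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			return cpu;
	return NR_CPUS;
}

/* Hypothetical alternative: rotate the search start so consecutive
 * wakeups get spread across the suitable CPUs. */
static int pick_rotating(unsigned int mask)
{
	static int prev;

	for (int i = 1; i <= NR_CPUS; i++) {
		int cpu = (prev + i) % NR_CPUS;

		if (mask & (1u << cpu)) {
			prev = cpu;
			return cpu;
		}
	}
	return NR_CPUS;
}

int main(void)
{
	unsigned int lowest_mask = 0x6;	/* CPUs 1 and 2 fit the task */

	for (int t = 0; t < 4; t++)
		printf("task%d: first=CPU%d rotating=CPU%d\n", t,
		       pick_first(lowest_mask), pick_rotating(lowest_mask));
	return 0;
}

With four simultaneous wakeups, pick_first() sends every task to CPU1, while
pick_rotating() alternates between CPU1 and CPU2. Doing something like this
in find_lowest_rq() would of course need a per-CPU or per-domain starting
point and some thought about cache locality and RT determinism, so take it as
a demonstration of the herding only, not as a proposed fix.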
--
Qais Yousef