Message-ID: <20200221111223.bb4y57zdox7qdy2w@e107158-lin.cambridge.arm.com>
Date: Fri, 21 Feb 2020 11:12:23 +0000
From: Qais Yousef <qais.yousef@....com>
To: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] sched/rt: fix pushing unfit tasks to a better CPU
On 02/21/20 13:45, Pavan Kondeti wrote:
> On Wed, Feb 19, 2020 at 02:02:44PM +0000, Qais Yousef wrote:
> > On 02/17/20 13:53, Qais Yousef wrote:
> > > On 02/17/20 14:53, Pavan Kondeti wrote:
> > > > I notice a case where tasks would migrate for no reason (it happens without
> > > > this patch too). Assuming the BIG cores are busy with other RT tasks, this
> > > > RT task can then go to *any* little CPU; there is no bias towards its
> > > > previous CPU. I don't know if it makes any difference, but I'd like to see
> > > > RT task placement be keener on reducing migrations unless one is absolutely
> > > > needed.
> > >
> > > In find_lowest_rq() there's a check whether task_cpu(p) is in the
> > > lowest_mask, and we prefer it if it is.
> > >
> > > But yeah, I see it happening too:
> > >
> > > https://imgur.com/a/FYqLIko
> > >
> > > Tasks on CPU 0 and 3 swap. Note that my tasks are periodic but the plots don't
> > > show that.
> > >
> > > I don't think I changed anything that should affect this bias. Do you think
> > > it's something I introduced?
> > >
> > > It's something maybe worth digging into though. I'll try to have a look.
> >
> > FWIW, I dug a bit into this and I found out we have a thundering herd issue.
> >
> > Since I just have a set of periodic tasks that all start together,
> > select_task_rq_rt() ends up selecting the same fitting CPU for all of them
> > (CPU1). They all end up waking up on CPU1, only to get pushed back out
> > again, with only one task surviving there.
> >
> > This reshuffles the task placement, ending with some tasks being swapped.
> >
> > I don't think this problem is specific to my change; it could happen without
> > it too.
> >
> > The problem is caused by the way find_lowest_rq() selects a CPU from the mask:
> >
> > 	best_cpu = cpumask_first_and(lowest_mask,
> > 				     sched_domain_span(sd));
> > 	if (best_cpu < nr_cpu_ids) {
> > 		rcu_read_unlock();
> > 		return best_cpu;
> > 	}
> >
> > It always returns the first CPU in the mask (and the mask could contain only
> > a single CPU anyway). The end result is that we most likely herd all the
> > tasks that wake up simultaneously onto the same CPU.
> >
> > I'm not sure how to fix this problem yet.
> >
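To make the herding concrete, here's a toy userspace model. The names are
made up and this is not kernel code; the rotating-cursor variant is just one
hypothetical way the picks could be spread out:

	#include <stdio.h>

	#define NR_CPUS		4
	#define NR_WAKEUPS	6

	/* Toy model of cpumask_first_and(): lowest set bit common to both masks. */
	static int first_and(unsigned int a, unsigned int b)
	{
		unsigned int both = a & b;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (both & (1u << cpu))
				return cpu;

		return NR_CPUS;	/* plays the role of nr_cpu_ids: nothing found */
	}

	/* Hypothetical alternative: rotate a cursor so concurrent pickers spread. */
	static int rotate_and(unsigned int a, unsigned int b)
	{
		static int cursor;
		unsigned int both = a & b;
		int i, cpu;

		for (i = 0; i < NR_CPUS; i++) {
			cpu = (cursor + i) % NR_CPUS;
			if (both & (1u << cpu)) {
				cursor = cpu + 1;
				return cpu;
			}
		}

		return NR_CPUS;
	}

	int main(void)
	{
		unsigned int lowest_mask = 0x7;	/* say CPUs 0-2 fit the task */
		unsigned int sd_span = 0xf;	/* the domain spans CPUs 0-3 */
		int t;

		/* 6 simultaneous wakeups: first_and() herds them all onto CPU0. */
		for (t = 0; t < NR_WAKEUPS; t++)
			printf("wakeup %d: first_and -> CPU%d, rotate_and -> CPU%d\n",
			       t, first_and(lowest_mask, sd_span),
			       rotate_and(lowest_mask, sd_span));

		return 0;
	}

Every call to first_and() over the same masks returns CPU0, which is exactly
what all the simultaneous wakeups see; rotate_and() cycles 0, 1, 2 instead.
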
>
> Yes, I have seen this problem too. It is not limited to RT; the fair class
> (the find_energy_efficient_cpu path) has the same issue. There is a window
> between selecting a CPU for a task and the task actually being queued there.
> Because of this, we may select the same CPU for two successive waking tasks.
> Turning off the TTWU_QUEUE sched feature addresses this to some extent; at
> least it solves cases like multiple tasks getting woken up from the same
> interrupt handler.
Oh, handy. Let me try this out.
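
For reference, I assume you mean toggling it at runtime via the debugfs knob
(needs CONFIG_SCHED_DEBUG):

	# echo NO_TTWU_QUEUE > /sys/kernel/debug/sched_features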
I've added it to my to-do list to investigate when I have time anyway.
On modern systems where the L3 spans all CPUs, the migration isn't that
costly, but it still adds unnecessary wakeup latency that can accumulate.
Thanks
--
Qais Yousef