Message-ID: <20180528010425.GA64067@joelaf.mtv.corp.google.com>
Date: Sun, 27 May 2018 18:04:25 -0700
From: Joel Fernandes <joel@...lfernandes.org>
To: Juri Lelli <juri.lelli@...hat.com>
Cc: peterz@...radead.org, mingo@...hat.com,
Dietmar Eggemann <dietmar.eggemann@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH] kernel/sched/topology: Clarify root domain(s) debug string
On Thu, May 24, 2018 at 05:29:36PM +0200, Juri Lelli wrote:
> When scheduler debug is enabled, building scheduling domains outputs
> information about how the domains are laid out and to which root domain
> each CPU (or sets of CPUs) belongs, e.g.:
>
> CPU0 attaching sched-domain(s):
> domain-0: span=0-5 level=MC
> groups: 0:{ span=0 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
> CPU1 attaching sched-domain(s):
> domain-0: span=0-5 level=MC
> groups: 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
>
> [...]
>
> span: 0-5 (max cpu_capacity = 1024)
>
> The fact that latest line refers to CPUs 0-5 root domain doesn't however look
last line?
> immediately obvious to me: one might wonder why span 0-5 is reported "again".
>
> Make it more clear by adding "root domain" to it, as to end with the
> following.
>
> CPU0 attaching sched-domain(s):
> domain-0: span=0-5 level=MC
> groups: 0:{ span=0 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }
> CPU1 attaching sched-domain(s):
> domain-0: span=0-5 level=MC
> groups: 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 0:{ span=0 }
>
> [...]
>
> root domain span: 0-5 (max cpu_capacity = 1024)
>
> Signed-off-by: Juri Lelli <juri.lelli@...hat.com>
I played with the sched_load_balance flag to trigger this, and it makes sense to
improve the print with 'root domain'.
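For reference, this is roughly what the change amounts to at the end of
build_sched_domains() in kernel/sched/topology.c (sketched from memory against
the patch description, so the surrounding context lines may differ):

	if (rq && sched_debug_enabled) {
-		pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
+		pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
			cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
	}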
Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
One thing I find a bit weird is that sched_load_balance can also affect the
wake-up path, because a NULL sd is attached to the rq when sched_load_balance
is set to 0.
This turns the "for_each_domain(cpu, tmp)" loop in select_task_rq_fair() into
a no-op, and hence we always end up taking the select_idle_sibling() path for
those CPUs (see the abridged sketch below).
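To illustrate, an abridged sketch of that path in select_task_rq_fair() in
kernel/sched/fair.c (details elided, from my reading of the current tree, so
it may not match exactly):

	struct sched_domain *tmp, *sd = NULL;
	...
	for_each_domain(cpu, tmp) {	/* iterates nothing if rq->sd == NULL */
		if (tmp->flags & sd_flag)
			sd = tmp;
		else if (!want_affine)
			break;
	}
	...
	if (unlikely(sd)) {
		/* Slow path */
		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
	} else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
		/* Fast path */
		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
	}

With sd left NULL, the first branch can never be taken, regardless of which
sd_flag was passed in.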
It also means that the "XXX always" comment can/should be removed, because sd
can very well be NULL for other sd_flag types as well, not just sd_flag ==
SD_BALANCE_WAKE. I'll send a patch to remove that comment, as I just verified
that this is the case.
thanks,
- Joel