Message-ID: <20180705150311.GE32579@e108498-lin.cambridge.arm.com>
Date: Thu, 5 Jul 2018 16:03:11 +0100
From: Quentin Perret <quentin.perret@....com>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: peterz@...radead.org, mingo@...hat.com, valentin.schneider@....com,
dietmar.eggemann@....com, vincent.guittot@...aro.org,
gaku.inami.xh@...esas.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv4 11/12] sched/core: Disable SD_ASYM_CPUCAPACITY for
root_domains without asymmetry
On Thursday 05 Jul 2018 at 15:13:49 (+0100), Morten Rasmussen wrote:
> On Thu, Jul 05, 2018 at 02:31:43PM +0100, Quentin Perret wrote:
> > If SD_ASYM_CPUCAPACITY means that some CPUs have different
> > arch_scale_cpu_capacity() values, we could also automatically _set_
> > the flag in sd_init() no ? Why should we let the arch set it and just
> > correct it later ?
> >
> > I understand the moment at which we know the capacities of CPUs varies
> > from arch to arch, but the arch code could just call
> > rebuild_sched_domain when the capacities of CPUs change and let the
> > scheduler detect things automatically. I mean, even if the arch code
> > sets the flag in its topology level table, it will have to rebuild
> > the sched domains anyway ...
> >
> > What do you think ?
>
> We could as well set the flag here so the architecture doesn't have to
> do it. It is a bit more complicated though for a few reasons:
>
> 1. Detecting when to disable the flag is a lot simpler than checking
> which level it should be set on. You basically have to work your way up
> from the lowest topology level until you get to a level spanning all the
> capacities available in the system to figure out where the flag should
> be set. I don't think this fits easily with how we build the
> sched_domain hierarchy. It can of course be done.
Ah, that is something I missed. I see in 1f6e6c7cb9bc ("sched/core:
Introduce SD_ASYM_CPUCAPACITY sched_domain topology flag") that this
flag should be set only at the _lowest_ level at which there is
asymmetry. I had the wrong impression that the flag was supposed to be
set at _all_ levels where there is some asymmetry. And actually having
'some asymmetry' isn't enough, we want to see the full range of CPU
capacities. Hmmm, that is indeed more complex than I thought ... :/
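Just to check I understand the walk you are describing, I guess per level
it would boil down to something like the completely untested sketch below,
applied bottom-up until it returns true. The helper name and the
sys_min_cap/sys_max_cap parameters are made up here, this is not code from
the series:

#include <linux/cpumask.h>
#include <linux/kernel.h>

/*
 * Untested sketch: return true if @span covers the full range of CPU
 * capacities in the system. SD_ASYM_CPUCAPACITY would then belong on
 * the lowest topology level for which this returns true.
 */
static bool span_covers_capacity_range(const struct cpumask *span,
				       unsigned long sys_min_cap,
				       unsigned long sys_max_cap)
{
	unsigned long cap, min_cap = ULONG_MAX, max_cap = 0;
	int cpu;

	for_each_cpu(cpu, span) {
		cap = arch_scale_cpu_capacity(NULL, cpu);
		min_cap = min(min_cap, cap);
		max_cap = max(max_cap, cap);
	}

	return min_cap == sys_min_cap && max_cap == sys_max_cap;
}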
>
> 2. As you say, we still need the arch code (or cpufreq?) to rebuild the
> whole thing once we know that the capacities have been determined. That
> currently implies implementing arch_update_cpu_topology() which is
> arch-specific. So we would need some arch code to make the rebuild happen
> at the right point in time. If the scheduler should be triggering the
> rebuild we need another way to force a full rebuild. This can also be done.
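Right, and just to spell that out, today forcing a full rebuild from the
arch side means something like the untested sketch below. The
capacities_changed flag and arch_set_cpu_capacities() are made-up names,
and locking is ignored:

#include <linux/cpuset.h>	/* rebuild_sched_domains() */

static bool capacities_changed;

/*
 * Overrides the __weak default so that the next partition_sched_domains()
 * call does a full rebuild instead of matching the existing domains.
 */
int arch_update_cpu_topology(void)
{
	bool changed = capacities_changed;

	capacities_changed = false;
	return changed;
}

static void arch_set_cpu_capacities(void)
{
	/* ... update the values behind arch_scale_cpu_capacity() ... */
	capacities_changed = true;
	rebuild_sched_domains();
}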
Yeah, with just this patch the arch code will have to:
1. update the arch_scale_cpu_capacity() values of the CPUs;
2. detect the asymmetry to set the flag in the topology table;
3. rebuild the sched domains to let the scheduler know about the new
topology information.
By doing what I suggest we would just save step 2 on the arch side and do
it in the scheduler. So actually, I'm really starting to wonder if it's
worth it given the added complexity ...
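For the record, step 2 on the arch side would look roughly like the
untested sketch below. The table, asym_cpu_flags() and
update_cpu_capacities() are made-up names, not code from this series:

#include <linux/cpuset.h>		/* rebuild_sched_domains() */
#include <linux/sched/topology.h>	/* set_sched_topology(), SD_* flags */
#include <linux/topology.h>		/* cpu_cpu_mask() */

/* Step 2: advertise the asymmetry through the topology table. */
static int asym_cpu_flags(void)
{
	return SD_ASYM_CPUCAPACITY;
}

static struct sched_domain_topology_level asym_topology[] = {
	{ cpu_cpu_mask, asym_cpu_flags, SD_INIT_NAME(DIE) },
	{ NULL, },
};

static void update_cpu_capacities(void)
{
	/* Step 1: refresh whatever backs arch_scale_cpu_capacity() ... */

	/* Steps 2 and 3: install the table and rebuild the domains. */
	set_sched_topology(asym_topology);
	rebuild_sched_domains();
}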
> 3. Detecting the flag in generic kernel/sched/* code means that all
> architectures will pay for the overhead when building/rebuilding the
> sched_domain hierarchy, and all architectures that set asymmetric cpu
> capacities will get the flag set whether they like it or not.
> I'm not sure if this is a problem.
That is true as well ...
>
> In the end it is really about how much of this we want in generic code
> and how much we hide in arch/, and if we dare to touch the sched_domain
> build code ;-)
Right, so you can argue that the arch code is here to give you
system-level information, and that if the scheduler wants to virtually
split that system, then it's its job to make sure that happens properly.
That is exactly what your patch does (IIUC), and I now think that this
is a very sensible middle-ground option. But this is debatable so I'm
interested to see what others think :-)
Thanks,
Quentin