Message-ID: <20191015115632.GC242992@google.com>
Date: Tue, 15 Oct 2019 12:56:32 +0100
From: Quentin Perret <qperret@...gle.com>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
peterz@...radead.org, vincent.guittot@...aro.org,
Dietmar.Eggemann@....com, morten.rasmussen@....com,
stable@...r.kernel.org
Subject: Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be
disabled
On Tuesday 15 Oct 2019 at 12:49:22 (+0100), Valentin Schneider wrote:
>
>
> On 15/10/2019 11:58, Valentin Schneider wrote:
> > On 15/10/2019 11:40, Quentin Perret wrote:
> >>> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
> >>> int i;
> >>>
> >>> rcu_read_lock();
> >>> +
> >>> + if (static_key_enabled(&sched_asym_cpucapacity)) {
> >>> + unsigned int cpu = cpumask_any(cpu_map);
> >>> +
> >>> + if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
> >>> + static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
> >>
> >> Lockdep should scream for this :)
> >
> > Bleh, yes indeed...
> >
>
> Urgh, I forgot about the funny hotplug lock scenario at boot time.
> rebuild_sched_domains() takes the lock but sched_init_domains() doesn't, so
> we don't get the might_sleep warn at boot time.
>
> So if we want to flip the key post boot time we probably need to separately
> count our asymmetric root domains and flip the key after all the rebuilds,
> outside of the hotplug lock.
Hmm, a problem here is that static_branch*() can block (it uses a
mutex) while you're inside the RCU read-side section, I think.
I suppose you could just move this above rcu_read_lock() and use
rcu_access_pointer() instead?
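Something like the below, perhaps? Entirely untested, just to sketch the
idea: the asymmetry check moves before rcu_read_lock(), and since the
per-CPU pointer is only compared against NULL and never dereferenced,
rcu_access_pointer() should be enough. The for_each_cpu() loop is the
existing body of detach_destroy_domains(), kept as-is.

static void detach_destroy_domains(const struct cpumask *cpu_map)
{
	unsigned int cpu = cpumask_any(cpu_map);
	int i;

	/*
	 * Only peek at the pointer here: rcu_access_pointer() doesn't
	 * require rcu_read_lock(), and static_branch_dec_cpuslocked()
	 * may sleep on the jump-label mutex, so it must not be called
	 * from within the RCU read-side section below.
	 */
	if (static_key_enabled(&sched_asym_cpucapacity) &&
	    rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
		static_branch_dec_cpuslocked(&sched_asym_cpucapacity);

	rcu_read_lock();
	for_each_cpu(i, cpu_map)
		cpu_attach_domain(NULL, &def_root_domain, i);
	rcu_read_unlock();
}
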
Thanks,
Quentin