lists.openwall.net - Open Source and information security mailing list archives
Date:   Thu, 6 Jul 2017 11:59:21 +0100
From:   Juri Lelli <juri.lelli@....com>
To:     Viresh Kumar <viresh.kumar@...aro.org>
Cc:     Dietmar Eggemann <dietmar.eggemann@....com>,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        linux@....linux.org.uk,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Russell King <rmk+kernel@...linux.org.uk>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will.deacon@....com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Morten Rasmussen <morten.rasmussen@....com>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v2 01/10] drivers base/arch_topology: free cpumask
 cpus_to_visit

Hi Viresh,

On 06/07/17 15:52, Viresh Kumar wrote:
> On 06-07-17, 10:49, Dietmar Eggemann wrote:

[...]

> >  static void parsing_done_workfn(struct work_struct *work)
> >  {
> > +	free_cpumask_var(cpus_to_visit);
> >  	cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
> >  					 CPUFREQ_POLICY_NOTIFIER);
> 
> As a general rule (and good coding practice), we should free resources
> only after their users are gone. So the order should be changed here:
> unregister the notifier first, then free the cpumask.
> 
> And because of that we may end up crashing the kernel here.
> 
> Here is an example:
> 
> Consider that init_cpu_capacity_callback() is getting called concurrently on big
> and LITTLE CPUs.
> 
> 
> CPU0 (big)                            CPU4 (LITTLE)
> 
>                                       if (cap_parsing_failed || cap_parsing_done)
>                                           return 0;
> 

But in this case the policy notifier for the LITTLE cluster has not
been executed yet, so that domain's CPUs have not yet been cleared from
cpus_to_visit. CPU0 won't see the mask as empty then, right?

> cap_parsing_done = true;
> schedule_work(&parsing_done_work);
> 
> parsing_done_workfn(work)
>   -> free_cpumask_var(cpus_to_visit);
>   -> cpufreq_unregister_notifier()
> 
> 
>                                       switch (val) {
>                                           ...
>                                           /* Touch cpus_to_visit and crash */
> 
> 
> My assumption here is that the same notifier head can get called in parallel on
> two CPUs as all I see there is a down_read() in __blocking_notifier_call_chain()
> which shouldn't block parallel calls.
> 

If that's the case, I'm wondering whether we need explicit
synchronization anyway. Otherwise both threads can read the mask as
full, clear only their own bits, and neither ends up scheduling the
workfn?

But can the policies be concurrently initialized? Or is the
initialization process serialized across the different domains?

Thanks,

- Juri
