Date:	Wed, 08 Jan 2014 01:14:11 +0100
From:	"Rafael J. Wysocki" <rjw@...ysocki.net>
To:	Viresh Kumar <viresh.kumar@...aro.org>
Cc:	linaro-kernel@...ts.linaro.org, patches@...aro.org,
	cpufreq@...r.kernel.org, linux-pm@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH Resend 1/4] cpufreq: stats: handle cpufreq_unregister_driver() and suspend/resume properly

On Tuesday, January 07, 2014 07:10:10 AM Viresh Kumar wrote:
> There are several problems with the way cpufreq stats handles
> cpufreq_unregister_driver() and suspend/resume:
> 
> - We must not lose the data collected so far when suspend/resume happens,
>   so the stats directories must not be removed/reallocated during these
>   operations, which is what currently happens.
> 
> - cpufreq_stats has registered notifiers with both the cpufreq and hotplug
>   cores. It adds the sysfs stats directory from a cpufreq notifier
>   (CPUFREQ_NOTIFY) and removes that directory from a hotplug-core notifier.
> 
>   When cpufreq_unregister_driver() is called (e.g. on rmmod of a cpufreq
>   driver), the per-CPU stats directories aren't removed, because the CPUs
>   are still online. The only call cpufreq_stats gets is
>   cpufreq_stats_update_policy_cpu(), for every CPU of each policy except
>   the last, and the pointer to the stats information ends up stored in the
>   entry for that last CPU in the per-cpu cpufreq_stats_table. But the
>   policy structure is freed inside the cpufreq core, so this results in a
>   memory leak inside cpufreq stats (we never free the stats memory).
> 
>   Now if we insert the module again, cpufreq_register_driver() is called,
>   and we again allocate stats data for the first CPU of every policy. If
>   there is only a single CPU per policy, cpufreq_stats_create_table()
>   returns an error because of this code:
> 
> 	if (per_cpu(cpufreq_stats_table, cpu))
> 		return -EBUSY;
> 
>   So the cpufreq stats directory probably wouldn't show up anymore (it was
>   added under the last policy's kobj, which no longer exists). I haven't
>   tested that, though. Also, the values in the stats files wouldn't be
>   refreshed, as we are still using the earlier stats structure.
> 
> - CPUFREQ_NOTIFY is issued from cpufreq_set_policy(), which is called in
>   scenarios where we don't really want cpufreq_stat_notifier_policy() to
>   run. For example, whenever anything related to a policy changes
>   (min/max/current freq, etc.), cpufreq_set_policy() is called and cpufreq
>   stats is notified, but it does nothing useful there beyond returning
>   -EBUSY from cpufreq_stats_create_table(). So this isn't the right
>   notifier for cpufreq stats to hook into.
> 
> For all the above reasons, this patch makes the following changes:
> - Add new notifier events, CPUFREQ_CREATE_POLICY and CPUFREQ_REMOVE_POLICY,
>   which are issued only when a policy is created or destroyed. They aren't
>   issued on the suspend/resume paths.
> - Use these events in cpufreq_stat_notifier_policy() to create/destroy the
>   stats sysfs entries, so that neither cpufreq_unregister_driver() nor
>   suspend/resume is a problem for cpufreq_stats anymore.
> - Return early from cpufreq_stat_cpu_callback() during the suspend/resume
>   sequence, so that we don't free the stats structure.
> 
> Acked-and-tested-by: Nicolas Pitre <nico@...aro.org>
> Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>

The entire series applied to bleeding-edge, thanks!

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
