Date:	Wed, 19 Mar 2014 23:08:49 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	Viresh Kumar <viresh.kumar@...aro.org>
CC:	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Lists linaro-kernel <linaro-kernel@...ts.linaro.org>,
	"cpufreq@...r.kernel.org" <cpufreq@...r.kernel.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Amit Daniel <amit.daniel@...sung.com>
Subject: Re: [RFC v3] cpufreq: Make sure frequency transitions are serialized

On 03/19/2014 08:18 PM, Srivatsa S. Bhat wrote:
> On 03/19/2014 07:05 PM, Viresh Kumar wrote:
>> On 19 March 2014 17:45, Srivatsa S. Bhat
>> <srivatsa.bhat@...ux.vnet.ibm.com> wrote:
>>> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
>>> +       bool                    transition_ongoing; /* Tracks transition status */
>>> +       struct mutex            transition_lock;
>>> +       wait_queue_head_t       transition_wait;
>>
>> Similar to what I did in my last version: why do you need
>> transition_ongoing and transition_wait? Why not simply work with
>> transition_lock, i.e., acquire it for the complete transition sequence?
>>
> 
> We *can't* acquire it for the complete transition sequence
> for drivers that do asynchronous notification, because
> PRECHANGE is sent from one thread and POSTCHANGE from a
> totally different thread! You can't acquire a lock in one
> task and release it in a different task; that would be a
> fundamental violation of locking rules.
> 
> That's why I introduced the wait queue: it helps us create
> a "flow" which encompasses two different but co-ordinating
> tasks. You simply can't do that elegantly with plain
> locks alone.
> 

By the way, note the updated changelog in my patch. It includes a brief
overview of the synchronization design, which is copy-pasted below for
reference. I forgot to mention this earlier!

-----

This patch introduces a set of synchronization primitives to serialize
frequency transitions, which are to be used as shown below:

cpufreq_freq_transition_begin();

//Perform the frequency change

cpufreq_freq_transition_end();

The _begin() call sends the PRECHANGE notification, whereas the _end() call
sends the POSTCHANGE notification. All the necessary synchronization is
handled within these calls. In particular, even drivers that set the
ASYNC_NOTIFICATION flag can use these APIs to perform frequency transitions
(i.e., you can call _begin() from one task and the corresponding _end() from
a different task).
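
For instance, a hypothetical asynchronous driver might use the pair roughly
as follows. This is only a sketch: the foo_* names and frequency values are
made up for illustration, and the argument lists of _begin()/_end() are my
assumptions (the snippet above shows the calls without arguments):

#include <linux/cpufreq.h>
#include <linux/workqueue.h>

static struct cpufreq_policy *foo_policy;
static struct cpufreq_freqs foo_freqs;

static struct cpufreq_frequency_table foo_freq_table[] = {
	{ .frequency = 800000 },
	{ .frequency = 1200000 },
	{ .frequency = CPUFREQ_TABLE_END },
};

/* Task A: runs in the governor's context and only *begins* the transition. */
static int foo_target_index(struct cpufreq_policy *policy, unsigned int index)
{
	foo_policy = policy;
	foo_freqs.old = policy->cur;
	foo_freqs.new = foo_freq_table[index].frequency;

	cpufreq_freq_transition_begin(policy, &foo_freqs);	/* PRECHANGE */

	/* Program the hardware; completion is signalled asynchronously. */
	return 0;
}

/* Task B: a bottom-half (here, a work item) run once the hardware is done. */
static void foo_transition_done(struct work_struct *work)
{
	cpufreq_freq_transition_end(foo_policy, &foo_freqs);	/* POSTCHANGE */
}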

The actual synchronization underneath is not that complicated:

The key challenge is to allow drivers to begin the transition from one thread
and end it in a completely different thread (this enables drivers that send
the POSTCHANGE notification asynchronously from bottom-halves to use the same
interface).

To achieve this, a 'transition_ongoing' flag, a 'transition_lock' mutex and a
wait-queue are added per-policy. The flag and the wait-queue are used in
conjunction to create an "uninterrupted flow" from _begin() to _end(). The
mutex ensures that only one such "flow" is in flight at any given time. Put
together, this provides all the necessary synchronization.
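
Concretely, the two calls could fit together roughly like this (again just a
sketch of code that would live in the cpufreq core; the exact signatures are
assumed, and cpufreq_notify_transition() is the existing notifier entry
point):

void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
				   struct cpufreq_freqs *freqs)
{
wait:
	/* Wait until no transition is in flight on this policy. */
	wait_event(policy->transition_wait, !policy->transition_ongoing);

	mutex_lock(&policy->transition_lock);

	/* Re-check under the lock: another waiter may have raced with us. */
	if (policy->transition_ongoing) {
		mutex_unlock(&policy->transition_lock);
		goto wait;
	}

	policy->transition_ongoing = true;

	mutex_unlock(&policy->transition_lock);

	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
}

/* May run in a different task than the one that called _begin(). */
void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
				 struct cpufreq_freqs *freqs)
{
	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);

	/* End the "flow" and let the next contender start its transition. */
	policy->transition_ongoing = false;
	wake_up(&policy->transition_wait);
}

Note that _end() never touches the mutex; it only clears the flag and wakes
up the waiters. That is precisely what allows it to run in a task different
from the one that executed _begin().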

Regards,
Srivatsa S. Bhat
