Date:	Wed, 19 Mar 2014 01:45:18 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	Dirk Brandewie <dirk.brandewie@...il.com>
CC:	dirk.j.brandewie@...el.com, Viresh Kumar <viresh.kumar@...aro.org>,
	Linux PM list <linux-pm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Patrick Marlier <patrick.marlier@...il.com>
Subject: Re: [PATCH v2 2/2] intel_pstate: Set core to min P state during core
 offline

On 03/19/2014 01:14 AM, Dirk Brandewie wrote:
> On 03/18/2014 11:52 AM, Srivatsa S. Bhat wrote:
>> On 03/18/2014 08:31 PM, Dirk Brandewie wrote:
>>> On 03/17/2014 10:44 PM, Viresh Kumar wrote:
>>>> On Sat, Mar 15, 2014 at 2:33 AM,  <dirk.brandewie@...il.com> wrote:
>>>>> +
>>>>>    static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
>>>>>    {
>>>>>           struct cpudata *cpu;
>>>>> @@ -818,7 +824,7 @@ static struct cpufreq_driver intel_pstate_driver = {
>>>>>           .setpolicy      = intel_pstate_set_policy,
>>>>>           .get            = intel_pstate_get,
>>>>>           .init           = intel_pstate_cpu_init,
>>>>> -       .exit           = intel_pstate_cpu_exit,
>>>>> +       .stop           = intel_pstate_cpu_stop,
>>>>
>>>> Probably keep exit as is and only change the P-state in stop(), so
>>>> that allocation of resources happens in init() and they are freed in
>>>> exit()?
>>>>
>>> I looked at doing just that, but it junked up the code. If stop() is
>>> called during DOWN_PREPARE, then init() will be called via
>>> __cpufreq_add_dev() in both the ONLINE and the DOWN_FAILED case. So
>>> once stop() has been called, the driver has to be ready for init() to
>>> be called again, exactly as it would be after exit().
>>>
>>
>> I'm sorry, but that didn't make much sense to me. Can you be a little
>> more specific as to what problems you hit while trying to have a
>> ->stop() which sets min P state and a separate ->exit() which frees
>> the resources? I think we can achieve this with almost no trouble.
>>
> 
> There was no problem per se. In stop(), all I really need to do is stop
> the timer and set the P state to MIN.
> 
> At init time I need to allocate memory and start the timer. If stopping
> the timer and deallocating the memory are separated, then I need code in
> init() to detect that case.
> 
> Moving all the cleanup to stop() makes my code simpler, covers the
> failure case, and maintains the behaviour expected by the core.
> 
>> If you ignore the failure case (such as DOWN_FAILED) for now, do you
>> still see any serious roadblocks?
> 
> Why would I ignore a valid failure case?
> 

Of course you shouldn't ignore it. I was just trying to make it easier
to think about the design without complicating it with arcane failure
cases right at the outset, that's all.

Now that I've looked at it again, I see your point. The problem is that
by the DOWN_PREPARE stage, the core would have completed only half of
the tear-down (via __cpufreq_remove_dev_prepare()), yet on failure
(DOWN_FAILED) it goes through a full init (via __cpufreq_add_dev()).
I would say that's not a great design from the cpufreq core's
perspective, but perhaps we can fix that later if it proves too painful
to live with.
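
To make the asymmetry concrete, the core's hotplug callback is shaped
roughly like the sketch below. This is simplified from my reading of the
core, not a quote of it: the argument lists are approximate and the other
notifier cases are left out.

static int cpufreq_cpu_callback(struct notifier_block *nfb,
				unsigned long action, void *hcpu)
{
	struct device *dev = get_cpu_device((unsigned long)hcpu);

	if (!dev)
		return NOTIFY_OK;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DOWN_PREPARE:
		/* Only the "prepare" half of the teardown runs here;
		 * this is where the driver's ->stop() gets called. */
		__cpufreq_remove_dev_prepare(dev, NULL);
		break;
	case CPU_ONLINE:
	case CPU_DOWN_FAILED:
		/* Both bring-up and rollback go through the full add
		 * path, so the driver's ->init() runs again even though
		 * only the partial teardown was done. */
		__cpufreq_add_dev(dev, NULL);
		break;
	}

	return NOTIFY_OK;
}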

So yes, now I see why you do all the teardown in ->stop(): it works
around the somewhat inconvenient rollback performed by the cpufreq core.
Your approach looks good to me.
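
For completeness, the kind of ->stop() handler we are talking about would
look roughly like the sketch below. The helper and field names
(all_cpu_data, intel_pstate_set_pstate, pstate.min_pstate) and the int
return are my guesses from the driver, not taken from the patch, so treat
it only as an illustration of "stop the timer, drop to the minimum P
state, free the per-CPU data".

static int intel_pstate_cpu_stop(struct cpufreq_policy *policy)
{
	/* Per-CPU data allocated in ->init(); array name assumed. */
	struct cpudata *cpu = all_cpu_data[policy->cpu];

	/* Stop the sampling timer so nothing touches the P state
	 * behind our back. */
	del_timer_sync(&cpu->timer);

	/* Drop the core to its minimum P state before it goes away. */
	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);

	/* Free everything so a later ->init() starts from scratch,
	 * exactly as it would after ->exit(). */
	kfree(cpu);
	all_cpu_data[policy->cpu] = NULL;

	return 0;
}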

Regards,
Srivatsa S. Bhat

