Message-ID: <CAKohponGN4JNwLKLjvFOPkru0=6JajOKKH=oJneDkKp=PyAmRA@mail.gmail.com>
Date:	Fri, 14 Mar 2014 22:37:27 +0530
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	Dirk Brandewie <dirk.brandewie@...il.com>
Cc:	Dirk Brandewie <dirk.j.brandewie@...el.com>,
	Linux PM list <linux-pm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Patrick Marlier <patrick.marlier@...il.com>
Subject: Re: [PATCH 0/2] Add exit_prepare callback to the cpufreq_driver interface.

On 14 March 2014 20:40, Dirk Brandewie <dirk.brandewie@...il.com> wrote:
> Are you proposing adding cpufreq_generic_suspend() to the core? I cannot
> find it in the mainline code.

It's already there in linux-next. I am suggesting reusing that
infrastructure, with some necessary modifications, to support both
suspend and hotplug.
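
For reference, here is roughly what cpufreq_generic_suspend() looks like
in linux-next (quoted from memory, so the exact code may differ slightly):

int cpufreq_generic_suspend(struct cpufreq_policy *policy)
{
	int ret;

	if (!policy->suspend_freq) {
		pr_err("%s: suspend_freq not defined\n", __func__);
		return -EINVAL;
	}

	pr_debug("%s: Setting suspend-freq: %u\n", __func__,
		 policy->suspend_freq);

	/* move the whole policy to the driver-specified suspend frequency */
	ret = __cpufreq_driver_target(policy, policy->suspend_freq,
				      CPUFREQ_RELATION_H);
	if (ret)
		pr_err("%s: unable to set suspend-freq: %u. err: %d\n",
		       __func__, policy->suspend_freq, ret);

	return ret;
}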

>> On top of that, I don't think Dirk's solution is going to work if we
>> twist the systems a bit.
>
> Could you explain what "twist the systems a bit" means?

The one I explained in the paragraph quoted below.

>> For example, Dirk probably wanted to set the P-state of every core to
>> MIN when it goes down. But his solution probably doesn't do that right now.
>>
>
> No, I wanted to set the core that was being off-lined to the min P-state.

Sorry, my words were probably a bit confusing. I meant exactly what you
just wrote: the core going down will set its freq to min.

>> exit() is called only when the policy expires or all the CPUs of that
>> policy are down. Suppose only one CPU out of 4 goes down from a policy;
>> the pstate driver would never know that happened, and that core wouldn't
>> go to min state.
>
> My patch does not change the semantics of exit() or when it is called.  For
> intel_pstate there is one cpu per policy so I moved all the cleanup to

I didn't know that this is guaranteed by the pstate driver. I thought it
would still be hardware dependent, as some cores might share a clock line.

> exit_prepare().  If there were work I needed to do at CPU_POST_DEAD, I
> would have continued to export the *optional* exit callback.
>
> The callback name exit_prepare() was probably unfortunate and might be
> causing some confusion.  I will be changing the name per Rafael's feedback.

Don't know. There is another problem here: exit_prepare() would be called
once per CPU, whereas exit() is called once per policy.

And I strongly feel that we shouldn't add another callback here, but
instead just have the core set the CPU to a specific freq specified by
the driver via some other field.
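
Something like this, purely as illustration (the field name and hook
point are invented here, not taken from any posted patch):

/*
 * Hypothetical: a per-policy analogue of suspend_freq that the core
 * programs for a departing CPU, so drivers need no extra callback.
 */
struct cpufreq_policy {
	/* ... existing fields ... */
	unsigned int	offline_freq;	/* freq to set when a CPU goes down */
};

/* the core would call this from the CPU_DOWN_PREPARE path, once per CPU */
static void cpufreq_set_offline_freq(struct cpufreq_policy *policy)
{
	if (policy->offline_freq)
		__cpufreq_driver_target(policy, policy->offline_freq,
					CPUFREQ_RELATION_H);
}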

>> I think we have two solutions here:
>> - If it's just about setting a core to a particular freq when it goes
>> down, it looks like a generic enough problem, so better to fix the core
>> for that. Probably with the help of the flags field / suspend-freq
>> (maybe renamed), and without calling the driver's exit() at all.
>
>
> ATM the only thing that needs to be done in this case is to allow
> intel_pstate to set the P-state on the core when it is going down.  My
> solution, from the core's point of view, is more generic: it allows any
> driver that needs to do work during CPU_DOWN_PREPARE to do it without
> adding any new logic to the core.

Yeah, but do we really need to give that freedom right now? It would be
better to get this into the core, as that would be more generic, and
people looking to set a core to some freq at shutdown wouldn't have to
replicate that code.
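
To make the contrast concrete, the callback-based approach looks roughly
like this (reconstructed from this thread rather than copied from the
patch; the intel_pstate internals are from memory, and the callback name
is expected to change per Rafael's feedback):

struct cpufreq_driver {
	/* ... */
	/* optional: called once per CPU from the CPU_DOWN_PREPARE path */
	void (*exit_prepare)(struct cpufreq_policy *policy);
};

static void intel_pstate_exit_prepare(struct cpufreq_policy *policy)
{
	struct cpudata *cpu = all_cpu_data[policy->cpu];

	/* drop the departing core to its minimum P-state */
	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
}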
