Message-ID: <7h4msub0ca.fsf@deeprootsystems.com>
Date:	Wed, 17 Dec 2014 10:25:09 -0800
From:	Kevin Hilman <khilman@...nel.org>
To:	amit daniel kachhap <amit.daniel@...sung.com>
Cc:	Marek Szyprowski <m.szyprowski@...sung.com>,
	LAK <linux-arm-kernel@...ts.infradead.org>,
	"linux-samsung-soc\@vger.kernel.org" 
	<linux-samsung-soc@...r.kernel.org>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Len Brown <len.brown@...el.com>,
	Ulf Hansson <ulf.hansson@...aro.org>,
	Tomasz Figa <tomasz.figa@...il.com>,
	Kukjin Kim <kgene.kim@...sung.com>,
	Sylwester Nawrocki <s.nawrocki@...sung.com>,
	Thomas Abraham <thomas.ab@...sung.com>,
	Pankaj Dubey <pankaj.dubey@...sung.com>,
	Geert Uytterhoeven <geert+renesas@...der.be>,
	"linux-pm\@vger.kernel.org" <linux-pm@...r.kernel.org>,
	"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC v3 1/2] PM / Domains: Extend API pm_genpd_dev_need_restore to use restore types

amit daniel kachhap <amit.daniel@...sung.com> writes:

> On Wed, Dec 17, 2014 at 3:40 AM, Kevin Hilman <khilman@...nel.org> wrote:
>> Marek Szyprowski <m.szyprowski@...sung.com> writes:
>>
>>> Hello,
>>>
>>> On 2014-12-13 17:51, Amit Daniel Kachhap wrote:
>>>> Instead of using a bool to restore suspended devices initially, use flags
>>>> like GPD_DEV_SUSPEND_INIT, GPD_DEV_RESTORE_INIT and GPD_DEV_RESTORE_FORCE.
>>>> The first two flags behave like the existing true/false values.
>>>> The third flag may be used to force a restore of suspended devices
>>>> whenever their associated power domain is turned on.
>>>>
>>>> Currently, the PD power-off function powers off all of the associated
>>>> unused devices; the functionality added in this patch is similar to that.
>>>>
>>>> This feature may be used for devices which are always in the ON state
>>>> while the PD associated with them is ON, but which need a local runtime
>>>> resume and suspend around PD off/on transitions. Such devices (clocks,
>>>> for example) may not implement the complete set of pm_runtime calls,
>>>> such as pm_runtime_get/pm_runtime_put, due to subsystem interaction
>>>> behaviour or some other reason.
>>>>
>>>> The model works like this:
>>>>      DEV1 (Attaches itself to the PD but makes no calls to pm_runtime_get
>>>>      /       and pm_runtime_put. Its local runtime_suspend/resume is
>>>>     /        invoked via the GPD_DEV_RESTORE_FORCE option)
>>>>    /
>>>> PD -- DEV2 (Implements complete runtime PM and calls pm_runtime_get and
>>>>    \        pm_runtime_put. This in turn invokes PD On/Off)
>>>>     \
>>>>      DEV3 (Similar to DEV1)
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.daniel@...sung.com>
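
(For illustration, a minimal sketch of what a DEV1-style driver might look
like under this proposal. The GPD_DEV_RESTORE_FORCE flag and the extended
pm_genpd_dev_need_restore() signature are assumptions taken from the RFC
description above, not final API:)

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/pm_domain.h>

static int dev1_runtime_suspend(struct device *dev)
{
	/*
	 * Invoked by genpd just before the power domain goes off:
	 * save register context, gate local clocks, etc.
	 */
	return 0;
}

static int dev1_runtime_resume(struct device *dev)
{
	/*
	 * Invoked by genpd just after the power domain comes back on:
	 * ungate local clocks, restore register context, etc.
	 */
	return 0;
}

static int dev1_probe(struct platform_device *pdev)
{
	pm_runtime_enable(&pdev->dev);

	/*
	 * Assumed RFC API: ask genpd to force-invoke the callbacks
	 * above around every PD off/on transition, since this driver
	 * never calls pm_runtime_get()/pm_runtime_put() itself.
	 */
	pm_genpd_dev_need_restore(&pdev->dev, GPD_DEV_RESTORE_FORCE);
	return 0;
}

static const struct dev_pm_ops dev1_pm_ops = {
	SET_RUNTIME_PM_OPS(dev1_runtime_suspend, dev1_runtime_resume, NULL)
};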
>>>
>>> The idea of adding a new genpd flag and reusing runtime PM calls instead
>>> of additional notifiers looks promising, but I have some doubts.
>>
>> I agree, this is better than notifiers, but I have some doubts too.
>
> Thanks,
>
>>
>>> I don't see any guarantee that devices with the GPD_DEV_RESTORE_FORCE
>>> flag will be suspended after all "normal" devices and restored before
>>> them. Without such a guarantee it will be hard to use this approach
>>> for iommu-related activities, because a device might need to use (in
>>> its suspend/resume callbacks) the functionality provided by another
>>> device with the GPD_DEV_RESTORE_FORCE flag. Maybe some additional
>>> flags, like a suspend/resume priority, would solve this dependency
>>> somehow.
>>
>> At a deeper level, the problem with this approach is that this is more
>> generically a runtime PM dependency problem, not a genpd problem.  For
>> example, what happens when the same kind of dependency exists on a
>> platform using a custom PM domain instead of genpd (like ACPI)?
>
> This patch does not try to solve runtime PM dependencies between
> devices. As an example, suppose there are three devices D1, D2 and D3
> in a power domain. Device D3 updates the power domain state
> requirement using the runtime PM API, but devices D1 and D2 do not
> want to control the domain; they just want to be notified when the
> power domain state changes.

Yes, I understand that.  

The question is: what do you do when you have the same dependency
problem and you're not using genpd? (For example, some SoCs have
implemented their own PM domains, and ACPI devices are managed by their
own PM domain, not genpd.)

>> What's needed to solve this problem is a generalized way to express
>> runtime PM dependencies between devices.  Runtime PM already
>> automatically handles parent devices as one type of dependency (e.g. a
>> parent device needs to be runtime PM resumed before its child).  So
>> what's needed is a generic way to register other PM dependencies with
>> the runtime PM core (not the genpd core).
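
(As a concrete reference for the parent/child case, a simplified sketch
using the existing runtime PM calls; the helper name touch_child_hw() is
made up for illustration:)

#include <linux/pm_runtime.h>

/* Hypothetical helper, for illustration only. */
static int touch_child_hw(struct device *child)
{
	int ret;

	/*
	 * The runtime PM core resumes child->parent before running the
	 * child's own runtime_resume callback, so the parent dependency
	 * is handled automatically, with no genpd involvement.
	 */
	ret = pm_runtime_get_sync(child);
	if (ret < 0) {
		pm_runtime_put_noidle(child);
		return ret;
	}

	/* ... access the child's registers here ... */

	pm_runtime_put(child);
	return 0;
}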
>
> Considering the example above with three devices, devices D1 and D2
> are passive components in this power domain. They only need to know
> about the state changes of the power domain; they would not control
> the power domain themselves nor place constraints on its state
> changes. So I did not clearly understand how this example could be
> solved by introducing changes in the runtime PM core.

Your solution only solves the problem for devices managed by genpd.

If I understood your example correctly, what you really need to solve
this problem more generically is a way to tell the runtime PM core
that D3 has a dependency on D1 and D2.  Then, whenever the runtime PM
core does get/put operations for D3, it also needs to do them for D1
and D2.

This would accomplish the same thing as your proposed approach, but
would work for any devices in any PM domains.
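
(No such hook existed in the runtime PM core at the time of this thread,
so the sketch below only illustrates the shape of the mechanism being
asked for. Later kernels added exactly this in the form of device links:
device_link_add() with DL_FLAG_PM_RUNTIME makes the core resume the
supplier devices whenever the consumer device is resumed:)

#include <linux/device.h>
#include <linux/pm_runtime.h>

/*
 * Sketch only: expresses "D3 depends on D1 and D2" to the driver core.
 * device_link_add() and DL_FLAG_PM_RUNTIME postdate this thread (v4.10);
 * they are shown here to illustrate the mechanism being requested.
 */
static int d3_declare_deps(struct device *d3, struct device *d1,
			   struct device *d2)
{
	if (!device_link_add(d3, d1, DL_FLAG_PM_RUNTIME))
		return -EINVAL;
	if (!device_link_add(d3, d2, DL_FLAG_PM_RUNTIME))
		return -EINVAL;

	/*
	 * From here on, pm_runtime_get_sync(d3) resumes d1 and d2
	 * first, and they stay resumed until d3 is suspended again,
	 * regardless of which PM domain (genpd, ACPI, custom) each
	 * device sits in.
	 */
	return 0;
}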

Kevin

