Date:	Thu, 08 Sep 2011 21:42:06 +0530
From:	Santosh <santosh.shilimkar@...com>
To:	Kevin Hilman <khilman@...com>
CC:	rjw@...k.pl, linux@....linux.org.uk, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, ccross@...roid.com
Subject: Re: [PATCH v2 2/5] cpu_pm: call notifiers during suspend

On Thursday 08 September 2011 07:31 PM, Kevin Hilman wrote:
> Santosh<santosh.shilimkar@...com>  writes:
>
>> On Thursday 08 September 2011 01:32 AM, Kevin Hilman wrote:
>>> Santosh Shilimkar<santosh.shilimkar@...com>   writes:
>>>
>>>> From: Colin Cross<ccross@...roid.com>
>>>>
>>>> Implements syscore_ops in cpu_pm to call the cpu and
>>>> cpu cluster notifiers during suspend and resume,
>>>> allowing drivers receiving the notifications to
>>>> avoid implementing syscore_ops.
>>>>
>>>> Signed-off-by: Colin Cross<ccross@...roid.com>
>>>> [santosh.shilimkar@...com: Rebased against 3.1-rc4]
>>>> Signed-off-by: Santosh Shilimkar<santosh.shilimkar@...com>
>>>
>>> I don't think using syscore_ops is right here.  The platform code should
>>> decide where in its own suspend path the notifiers should be triggered.
>>>
>>> The reason is that while the syscore_ops run late in the suspend
>>> path, they still run before some platform-specific decisions about
>>> the low-power states are made.  That means any notifier that needs
>>> information about the target low-power state (e.g. whether context
>>> will be lost or not) cannot get it, since that decision is not made
>>> until platform_suspend_ops->enter() runs.
>>>
>> Initially I thought the same, but in general for S2R the platform
>> doesn't support multiple states the way CPUidle does. On OMAP we do
>> have a debug option to choose the state, but on a real product the
>> deepest supported state is always used. So drivers saving their
>> full context for S2R should be fine.
>>
>> Of course, for CPUidle the notifier call-chain decisions are left
>> to the platform CPUidle drivers, since there can be multiple low
>> power states and the context save/restore has to be done based
>> on the target low power state.
>>
>> The advantage with this is that the platform code stays free of the
>> notifier calls. The CPUidle driver needs to issue the different
>> notifier events based on C-states, and that works perfectly.
>>
>> I liked this simplification for S2R. The downside is that if S2R
>> doesn't target the deepest state, drivers end up saving their full
>> context anyway, which I guess is fine.
>
> That's not the downside I'm worried about.
>
> If you have a driver that has a notifier, presumably it has something it
> wants to do to prepare for suspend *and* for idle, and you'd only want a
> single notifier callback in the driver to be used for both.  That
> callback would look something like:
>
>     start_preparing_for_suspend();
>
>     if (next_state == OFF)
>        save_context();
>
>     finish_preparing_for_suspend();
>
>
> The problem with the current cpu_*_pm_enter() calls in syscore_ops is
> that they happen before the next states are programmed, so during
> suspend the 'if (next_state == off)' above would never be true, but
> during idle it might be.
>
Point taken, and I now agree with you.

I am going to drop this patch unless somebody shouts at me.
That means the platform code needs to take care of the suspend case
as well, which is a trivial change.

Thanks for bringing up the point.

Regards
Santosh