Message-ID: <7186da1f-4d16-48f5-bdc0-cb04942b3a5e@linaro.org>
Date: Mon, 14 Jul 2025 11:35:56 +0100
From: Tudor Ambarus <tudor.ambarus@...aro.org>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Linux PM <linux-pm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Alan Stern <stern@...land.harvard.edu>, Ulf Hansson
<ulf.hansson@...aro.org>, Johan Hovold <johan@...nel.org>,
Jon Hunter <jonathanh@...dia.com>, Saravana Kannan <saravanak@...gle.com>,
William McVicker <willmcvicker@...gle.com>,
Peter Griffin <peter.griffin@...aro.org>,
André Draszik <andre.draszik@...aro.org>
Subject: Re: [PATCH v3 1/5] PM: sleep: Resume children after resuming the
parent
On 7/14/25 8:29 AM, Rafael J. Wysocki wrote:
>> diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
>> index d9d4fc58bc5a..0e186bc38a00 100644
>> --- a/drivers/base/power/main.c
>> +++ b/drivers/base/power/main.c
>> @@ -1281,6 +1281,27 @@ static void dpm_async_suspend_parent(struct device *dev, async_func_t func)
>> dpm_async_with_cleanup(dev->parent, func);
>> }
>>
>> +static void dpm_async_suspend_complete_all(struct list_head *device_list)
>> +{
>> + struct device *dev;
>> +
>> +
>> + pr_err("tudor: %s: enter\n", __func__);
>> + guard(mutex)(&async_wip_mtx);
>> +
>> + list_for_each_entry_reverse(dev, device_list, power.entry) {
>> + /*
>> + * In case the device is being waited for and async processing
>> + * has not started for it yet, let the waiters make progress.
>> + */
>> + pr_err("tudor: %s: in device list\n", __func__);
>> + if (!dev->power.work_in_progress) {
>> + pr_err("tudor: %s: call complete_all\n", __func__);
>> + complete_all(&dev->power.completion);
>> + }
>> + }
>> +}
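(Side note, mostly for my own understanding: the waiter that these
complete_all() calls unblock is dpm_wait(), which roughly looks like the
sketch below; this is simplified from drivers/base/power/main.c, so the
exact upstream details may differ.)

static void dpm_wait(struct device *dev, bool async)
{
	if (!dev)
		return;

	/* Only block when the device takes part in async suspend/resume. */
	if (async || (pm_async_enabled && dev->power.async_suspend))
		wait_for_completion(&dev->power.completion);
}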
>> +
>> /**
>> * resume_event - Return a "resume" message for given "suspend" sleep state.
>> * @sleep_state: PM message representing a sleep state.
>> @@ -1459,6 +1480,7 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
>> mutex_lock(&dpm_list_mtx);
>>
>> if (error || async_error) {
>> + dpm_async_suspend_complete_all(&dpm_late_early_list);
>> /*
>> * Move all devices to the target list to resume them
>> * properly.
>> @@ -1663,6 +1685,7 @@ int dpm_suspend_late(pm_message_t state)
>> mutex_lock(&dpm_list_mtx);
>>
>> if (error || async_error) {
>> + dpm_async_suspend_complete_all(&dpm_late_early_list);
>> /*
>> * Move all devices to the target list to resume them
>> * properly.
>> @@ -1959,6 +1982,7 @@ int dpm_suspend(pm_message_t state)
>> mutex_lock(&dpm_list_mtx);
>>
>> if (error || async_error) {
>> + dpm_async_suspend_complete_all(&dpm_late_early_list);
> -> There is a bug here which is not present in the patch I've sent.
My bad, I edited by hand, sorry.
>
> It should be
>
> dpm_async_suspend_complete_all(&dpm_prepared_list);
Wonderful, it seems this makes suspend happy on the downstream Pixel 6!
I'm running some more tests and will get back to you in a few hours.
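If I understand correctly, that is because in dpm_suspend() the devices
that have not been handled yet are still sitting on dpm_prepared_list
(they are only moved off it once processed), so that is the list whose
pending completions need completing on the error path. The hunk I'm
retesting with is simply:

	if (error || async_error) {
		dpm_async_suspend_complete_all(&dpm_prepared_list);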
>
> It is also there in dpm_noirq_suspend_devices() above, but it probably
> doesn't matter.
>
>> /*
>> * Move all devices to the target list to resume them
>> * properly.
>> @@ -1970,9 +1994,12 @@ int dpm_suspend(pm_message_t state)
>>
>> mutex_unlock(&dpm_list_mtx);
>>
>> + pr_err("tudor: %s: before async_synchronize_full\n", __func__);
>> async_synchronize_full();
>> if (!error)
>> error = async_error;
>> + pr_err("tudor: %s: after async_synchronize_full();\n", __func__);
>> +
>>
>> if (error)
>> dpm_save_failed_step(SUSPEND_SUSPEND);