Message-ID: <7hh5s547ot.fsf@baylibre.com>
Date: Wed, 28 Jan 2026 15:51:46 -0800
From: Kevin Hilman <khilman@...libre.com>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, linux-pm@...r.kernel.org,
Dhruva Gole <d-gole@...com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for
system-wide PM

Hi Ulf,

Ulf Hansson <ulf.hansson@...aro.org> writes:
> On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@...libre.com> wrote:
>>
>> Currently QoS resume latencies are only considered for runtime PM
>> transitions of pmdomains, which remains the default.
>>
>> In order to also support QoS resume latencies during system-wide PM,
>> add a new flag to indicate a resume latency should be used for
>> system-wide PM *instead of* runtime PM.
>>
>> For example, by doing this:
>>
>> # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
>>
>> dev0 now has a resume latency of 500000 usec for runtime PM
>> transitions.
>>
>> Then, if the new flag is also set:
>>
>> # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
>>
>> That 500000 usec latency constraint now applies to system-wide PM
>> (and not to runtime PM).
>>
>> If a user requires a different latency value for system-wide PM
>> compared to runtime PM, then the runtime PM value can be set for
>> normal operations, and the system-wide value (and flag) can be set by
>> userspace before suspend, and the runtime PM value can be restored
>> after resume.
>
> That sounds complicated for user-space to manage - and causes churn
> during every suspend/resume cycle. Why don't we just add a new latency
> value instead, that applies both to runtime PM and system-wide PM,
> similar and consistent with what we did for CPU QoS?

First, I don't think it will be very common to have different *device*
latency values between runtime PM and system PM, because the reasons
for a device-specific wakeup latency will likely be the same in both
cases, at least for all the use cases I've thought about. The only
real distinction is whether the latency should be applied to runtime
or system-wide PM, which the new flag provides.

Second, this doesn't have to be done in userspace at all; that's just
the example I used for illustration. In fact, today not many latency
constraints are exposed to userspace, so this can be achieved via the
kernel API for setting latency values & flags, which I think is the
more likely use case anyway. For example, a driver that manages a
wakeup latency constraint could update its own constraint and set the
flag in its ->prepare() and ->complete() hooks if it needs separate
values for system-wide vs. runtime PM.
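
Roughly like this (only a sketch against this series; the flag name
PM_QOS_FLAG_RESUME_LATENCY_SYSTEM and the foo_* names/values are made
up for illustration, and the two requests are assumed to have been
added in probe with dev_pm_qos_add_request()):

  #include <linux/device.h>
  #include <linux/pm_qos.h>

  /* Hypothetical per-device latency values, in usecs. */
  #define FOO_RUNTIME_LATENCY_US   500000
  #define FOO_SYSTEM_LATENCY_US   2000000

  struct foo_priv {
          struct dev_pm_qos_request latency_req; /* DEV_PM_QOS_RESUME_LATENCY */
          struct dev_pm_qos_request flags_req;   /* DEV_PM_QOS_FLAGS */
  };

  static int foo_prepare(struct device *dev)
  {
          struct foo_priv *priv = dev_get_drvdata(dev);

          /* Point the (larger) constraint at system-wide PM for suspend. */
          dev_pm_qos_update_request(&priv->latency_req, FOO_SYSTEM_LATENCY_US);
          dev_pm_qos_update_request(&priv->flags_req,
                                    PM_QOS_FLAG_RESUME_LATENCY_SYSTEM);
          return 0;
  }

  static void foo_complete(struct device *dev)
  {
          struct foo_priv *priv = dev_get_drvdata(dev);

          /* Restore the runtime PM constraint after resume. */
          dev_pm_qos_update_request(&priv->flags_req, 0);
          dev_pm_qos_update_request(&priv->latency_req, FOO_RUNTIME_LATENCY_US);
  }

with .prepare/.complete wired up in the driver's dev_pm_ops as usual.
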
Third, adding a new QoS value for this involves a bunch of new code
that is basically a copy/paste of the current latency code. That
includes APIs for:

 - the sysfs interface
 - notifiers (add, remove)
 - a new type in the read/add/update value paths
 - exposing the value to userspace (which becomes new ABI)
 - tolerance handling

I actually went down this route first and realized it would be lots of
duplicated code for a use case that we're not even sure exists, so I
found the flag approach to be much more straightforward for the use
cases at hand.

Kevin