Message-ID: <CAPDyKFoxbYtPTs+Egsn=2pJYdsw8g+yXfFjy-NAyq+X2ohyEhA@mail.gmail.com>
Date: Thu, 29 Jan 2026 12:04:53 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Kevin Hilman <khilman@...libre.com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, linux-pm@...r.kernel.org, 
	Dhruva Gole <d-gole@...com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for
 system-wide PM

On Thu, 29 Jan 2026 at 00:51, Kevin Hilman <khilman@...libre.com> wrote:
>
> Hi Ulf,
>
> Ulf Hansson <ulf.hansson@...aro.org> writes:
>
> > On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@...libre.com> wrote:
> >>
> >> Currently QoS resume latencies are only considered for runtime PM
> >> transitions of pmdomains, which remains the default.
> >>
> >> In order to also support QoS resume latencies during system-wide PM,
> >> add a new flag to indicate a resume latency should be used for
> >> system-wide PM *instead of* runtime PM.
> >>
> >> For example, by doing this:
> >>
> >>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
> >>
> >> dev0 now has a resume latency of 500000 usec for runtime PM
> >> transitions.
> >>
> >> Then, if the new flag is also set:
> >>
> >>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
> >>
> >> That 500000 usec delay now applies to system-wide PM (and not to
> >> runtime PM).
> >>
> >> If a user requires a different latency value for system-wide PM than
> >> for runtime PM, the runtime PM value can be set for normal operation,
> >> the system-wide value (and flag) can be set by userspace before
> >> suspend, and the runtime PM value can be restored after resume.
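> >>
> >> A rough sketch of that userspace sequence (the 2000000 value below is
> >> just a made-up example of a different system-wide latency, and it
> >> assumes that writing 0 to pm_qos_latency_sys clears the flag again):
> >>
> >> Before suspend, switch to the system-wide value and set the flag:
> >>
> >>    # echo 2000000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
> >>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
> >>
> >> After resume, clear the flag and restore the runtime PM value:
> >>
> >>    # echo 0 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
> >>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us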
> >
> > That sounds complicated for user space to manage - and causes churn
> > during every suspend/resume cycle. Why don't we just add a new latency
> > value instead, one that applies to both runtime PM and system-wide PM,
> > similar and consistent with what we did for CPU QoS?
>
> First, I don't think it will be very common to have different *device*
> latency values between runtime PM and system PM, because the reasons for
> device-specific wakeup latency will likely be the same in both cases, at
> least for all the use cases I've thought about.  The only real distinction
> is whether the latency should be applied to runtime or system-wide
> PM, which the new flag provides.
>
> Second, this doesn't have to be in userspace at all, that's just the
> example I used to illustrate.  In fact, today not many latency
> constraints are exposed to userspace, so this can be achieved by the
> kernel API for setting latency values & flags, which I think is the more
> likely use case anyway.  For example, a driver that is managing a
> wakeup latency constraint could update its own constraint and set
> the flag in its ->prepare() and ->complete() hooks if it needs separate
> values for system-wide vs. runtime PM.
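>
> As a very rough sketch of what that could look like (the
> DEV_PM_QOS_FLAG_RESUME_LATENCY_SYS name and the latency values are
> placeholders, not necessarily what the patches define, and it assumes
> the new flag can be driven through dev_pm_qos_update_flags()):
>
>    #include <linux/pm_qos.h>
>
>    /* Request added at probe time with something like:
>     * dev_pm_qos_add_request(dev, &foo_latency_req,
>     *                        DEV_PM_QOS_RESUME_LATENCY, FOO_RT_LATENCY_US);
>     */
>    static struct dev_pm_qos_request foo_latency_req;
>
>    #define FOO_RT_LATENCY_US   500000   /* runtime PM constraint */
>    #define FOO_SYS_LATENCY_US  2000000  /* system-wide constraint */
>
>    static int foo_prepare(struct device *dev)
>    {
>            /* Switch the constraint to the system-wide value ... */
>            dev_pm_qos_update_request(&foo_latency_req, FOO_SYS_LATENCY_US);
>            /* ... and mark it as applying to system-wide PM (placeholder flag). */
>            return dev_pm_qos_update_flags(dev,
>                            DEV_PM_QOS_FLAG_RESUME_LATENCY_SYS, true);
>    }
>
>    static void foo_complete(struct device *dev)
>    {
>            /* Back to runtime PM behaviour after resume. */
>            dev_pm_qos_update_flags(dev, DEV_PM_QOS_FLAG_RESUME_LATENCY_SYS, false);
>            dev_pm_qos_update_request(&foo_latency_req, FOO_RT_LATENCY_US);
>    }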

Right, as long as the use cases can be managed by the kernel itself,
this should be fine. So I guess the question is whether we should
consider use cases that require user-space involvement at this point.

Also note, patch1 does expose a new QoS sysfs file to allow user space
to manage the new QoS flag - so this becomes ABI.

>
> Third, adding a new QoS value for this involves a bunch of new code that
> is basically copy/paste of the current latency code.  That includes APIs
> for
>
>   - sysfs interface
>   - notifiers (add, remove)
>   - read/add/update value adds a new type
>   - expose value to userspace (becomes ABI)
>   - tolerance
>
> I actually went down this route first, and realized it would be lots
> of duplicated code for a use case that we're not even sure exists, so I
> found the flag approach to be much more straightforward for the
> use cases at hand.

I understand your concern and I agree!

However, my main issue is the user-space ABI part. Is the QoS flag
that patch1 exposes future-proof enough when considering use cases
that need to be managed by user space? In my opinion, I don't think
it is.

Kind regards
Uffe
