Message-ID: <c758497c-b008-4fea-a4a3-fb8769ce8b2a@oss.qualcomm.com>
Date: Thu, 10 Jul 2025 20:51:41 +0800
From: "Aiqun(Maria) Yu" <aiqun.yu@....qualcomm.com>
To: Georgi Djakov <djakov@...nel.org>, Mike Tipton <quic_mdtipton@...cinc.com>
Cc: linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
quic_okukatla@...cinc.com
Subject: Re: [PATCH v2] interconnect: Use rt_mutex for icc_bw_lock
On 5/16/2025 11:50 PM, Georgi Djakov wrote:
> Hi Mike,
...
>> result in frame drops and visual glitches.
>
> Ok, so the issue we see is caused by lock contention, as we have many
> clients and some of them try to do very aggressive scaling.
>
>> To prevent this priority inversion, switch to using rt_mutex for
>> icc_bw_lock. This isn't needed for icc_lock since that's not used in the
>> critical, latency-sensitive voting paths.
>
> If the issue does not occur anymore with this patch, then this is a good
> sign, but we still need to get some numbers and put them in the commit
> message. The RT mutexes add some overhead and complexity that could
I have some preliminary latency numbers for the icc_lock mutex on my
Android phone under normal conditions: the lock wait time ranges from
50 to 1000 nanoseconds. I observed three normal-priority tasks and one
real-time (RT) task contending for icc_lock. The numbers are not broken
down by RT versus normal tasks, but the 1000 ns latency was observed on
the RT task.
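For reference, here is a rough sketch of how such lock-wait latency can
be sampled with a local debug patch in drivers/interconnect/core.c. The
icc_lock_timed() helper and the trace format are illustrative only, not
the exact instrumentation used for the numbers above:

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(icc_lock);	/* as declared in drivers/interconnect/core.c */

static void icc_lock_timed(void)
{
	ktime_t start = ktime_get();

	mutex_lock(&icc_lock);
	/* Report how long this task waited on the lock, in nanoseconds. */
	trace_printk("icc_lock wait: %lld ns\n",
		     ktime_to_ns(ktime_sub(ktime_get(), start)));
}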
The latency numbers can vary significantly depending on the scenario.
Please feel free to suggest any specific testing scenarios to capture
the numbers you are interested in.
The delay numbers are bounded by the scheduler's tick granularity. For
instance, with a 250Hz scheduler tick on a single CPU, the lock holder
may only get a chance to run roughly every 4ms per sched_tick, depending
on the vruntime of the other runnable tasks. Since both real-time (RT)
tasks and normal tasks can compete for this particular mutex, it is
advisable to use an rt_mutex to improve real-time behavior.
Here is the potential flow for better understanding:
+--------------+                  +-----------------+
|  RT Task A   |                  |Normal cfs task B|
+--------------+                  +-----------------+
                                  mutex_lock(&icc_lock)
                                  Runnable, but waiting behind other
                                  high prio normal tasks
                                  ~4ms per sched_tick to get a chance
                                  to run
call icc_set_bw()
mutex_lock(&icc_lock)
  (blocks, waiting for task B)
                                  Get the chance to run
                                  -->mutex_unlock(&icc_lock)
                                  -->deboost task_B prio
get the lock
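To illustrate, a minimal sketch of what the rt_mutex switch in
drivers/interconnect/core.c could look like; the exact diff is in Mike's
patch and may differ, and the icc_set_bw_sketch() wrapper below is
illustrative only:

#include <linux/rtmutex.h>

static DEFINE_RT_MUTEX(icc_bw_lock);	/* was: static DEFINE_MUTEX(icc_bw_lock); */

static void icc_set_bw_sketch(void)
{
	/* An RT waiter boosts a lower-priority lock owner (priority inheritance). */
	rt_mutex_lock(&icc_bw_lock);

	/* ... aggregate and apply the bandwidth requests ... */

	/* The owner is deboosted back to its normal priority on unlock. */
	rt_mutex_unlock(&icc_bw_lock);
}

With priority inheritance, task B in the flow above would run at task A's
priority as soon as A blocks on the lock, instead of waiting behind the
other normal tasks for several sched_ticks.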
> increase latency for both uncontended and contended paths. I am curious
Yes, there will be some overhead. However, if the rt_mutex lets the RT
task get through the icc_lock faster, and the clock settings it applies
benefit the entire system, the trade-off can be worthwhile. For example,
raising the interconnect clock earlier can give an overall performance
boost. In theory, this approach is worth considering.
> if there is any regression for the non-priority scenarios. Also if there
> are many threads, the mutex cost itself could become a bottleneck.
>
>>
>> Signed-off-by: Mike Tipton <quic_mdtipton@...cinc.com>
--
Thx and BRs,
Aiqun(Maria) Yu