Message-ID: <560a7dab-1f3e-4bda-bec3-3ed1723501cb@oss.qualcomm.com>
Date: Wed, 3 Dec 2025 10:20:41 +0800
From: Yin Li <yin.li@....qualcomm.com>
To: Georgi Djakov <djakov@...nel.org>, Mike Tipton <quic_mdtipton@...cinc.com>
Cc: linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        quic_aiquny@...cinc.com, quic_okukatla@...cinc.com
Subject: Re: [PATCH v2] interconnect: Use rt_mutex for icc_bw_lock


On 5/16/2025 11:50 PM, Georgi Djakov wrote:
> Hi Mike,
> 
> On 6.05.25 17:51, Mike Tipton wrote:
>> The icc_set_bw() function is often used in latency sensitive paths to
>> scale BW on a per-frame basis by high priority clients such as GPU and
>> display. However, there are many low priority clients of icc_set_bw() as
>> well. This can lead to priority inversion and unacceptable delays for
>> the high priority clients. Which in the case of GPU and display can
>> result in frame drops and visual glitches.
> 
> Ok, so the issue we see is caused by lock contention, as we have many
> clients and some of them try to do very aggressive scaling.
> 
>> To prevent this priority inversion, switch to using rt_mutex for
>> icc_bw_lock. This isn't needed for icc_lock since that's not used in the
>> critical, latency-sensitive voting paths.
> 
> If the issue does not occur anymore with this patch, then this is a good
> sign, but we still need to get some numbers and put them in the commit
> message. The RT mutexes add some overhead and complexity that could
> increase latency for both uncontended and contended paths. I am curious
> if there is any regression for the non-priority scenarios. Also if there
> are many threads, the mutex cost itself could become a bottleneck.

Hi Georgi,

We constructed a priority-inversion test scenario, with multiple real-time 
threads of different priorities and CFS threads with different nice values 
competing for a mutex, to measure the overhead for the RT thread to 
acquire the lock.

The maximum, minimum, and average overhead were determined over 100 test 
iterations.
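
For reference, the test is conceptually along the lines of the sketch 
below. This is a simplified, illustrative kthread-based module, not the 
exact harness we used: the real test spawns multiple RT and CFS threads 
at different priorities, while the sketch shows a single CFS contender 
and a single RT waiter, and names like test_lock are made up for 
illustration.

/*
 * Illustrative sketch of the contention test, not the exact harness:
 * the real test uses multiple RT and CFS threads at different
 * priorities; a single pair is shown here for brevity.
 */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/limits.h>
#include <linux/sched.h>

static DEFINE_MUTEX(test_lock);		/* swapped for an rt_mutex in the second run */
static struct task_struct *cfs_task;

/* Low-priority CFS contender: repeatedly holds the lock for ~5 ms of work. */
static int cfs_contender(void *nice_val)
{
	set_user_nice(current, (long)nice_val);
	while (!kthread_should_stop()) {
		mutex_lock(&test_lock);
		mdelay(5);			/* simulated critical-section work */
		mutex_unlock(&test_lock);
		usleep_range(100, 200);
	}
	return 0;
}

/* High-priority RT waiter: times how long it takes to acquire the lock. */
static int rt_waiter(void *unused)
{
	u64 t0, delta, min_ns = U64_MAX, max_ns = 0, total_ns = 0;
	int i;

	sched_set_fifo(current);
	for (i = 0; i < 100; i++) {		/* 100 iterations */
		t0 = ktime_get_ns();
		mutex_lock(&test_lock);
		delta = ktime_get_ns() - t0;
		mutex_unlock(&test_lock);

		if (delta < min_ns)
			min_ns = delta;
		if (delta > max_ns)
			max_ns = delta;
		total_ns += delta;
		msleep(10);
	}
	pr_info("lock wait: min=%llu max=%llu avg=%llu ns\n",
		min_ns, max_ns, div_u64(total_ns, 100));
	return 0;
}

static int __init inv_test_init(void)
{
	cfs_task = kthread_run(cfs_contender, (void *)10L, "cfs_contender");
	kthread_run(rt_waiter, NULL, "rt_waiter");	/* exits after 100 iterations */
	return 0;
}

static void __exit inv_test_exit(void)
{
	kthread_stop(cfs_task);
}

module_init(inv_test_init);
module_exit(inv_test_exit);
MODULE_LICENSE("GPL");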

We then replaced the mutex with an rt-mutex and performed the same test, 
again obtaining the maximum, minimum, and average overhead over 100 
iterations, and calculated the change in the average.
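
The rt-mutex run changes only the lock type in the sketch above, roughly:

#include <linux/rtmutex.h>

static DEFINE_RT_MUTEX(test_lock);	/* was: static DEFINE_MUTEX(test_lock); */

	/* and in the thread functions: */
	rt_mutex_lock(&test_lock);	/* was: mutex_lock(&test_lock); */
	...
	rt_mutex_unlock(&test_lock);	/* was: mutex_unlock(&test_lock); */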

From this we can draw the following conclusions:
1) In a scenario where the overhead of the threads competing for the mutex 
is set to 5 ms, using a regular mutex results in an average overhead of 
4127687 ns for the tested RT threads to acquire the mutex.

2) After replacing the mutex with an rt-mutex, the average latency drops 
to 2010555 ns, which greatly reduces the overhead caused by priority 
inversion, a reduction of about 50%.

3) Furthermore, to align with the 40 ms overhead reported by the user, the 
test case was modified so that the competing threads hold the mutex for 
40 ms; repeating the experiment yielded similar results.


In our tests, the overhead of a single rt-mutex operation is approximately 
937 ns, while a single mutex operation is approximately 520 ns, so a 
single rt-mutex operation does indeed add more latency.

However, in scenarios where multiple clients frequently access the 
interconnect API, the priority-inversion latency incurred with regular 
mutexes far outweighs the per-operation overhead added by rt-mutexes 
themselves.

Compared to the improvement rt-mutex brings in a thread-contention 
environment, this added per-operation latency is perfectly acceptable.
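
For completeness, the single-operation numbers above come from timing 
uncontended lock/unlock pairs in a tight loop, along the lines of the 
sketch below (the loop count and function name are again illustrative):

/* Illustrative: average cost of one uncontended lock/unlock pair. */
static void measure_single_op(void)
{
	const int loops = 100000;
	u64 t0 = ktime_get_ns();
	int i;

	for (i = 0; i < loops; i++) {
		mutex_lock(&test_lock);		/* rt_mutex_lock() for the rt-mutex case */
		mutex_unlock(&test_lock);
	}
	pr_info("avg per lock/unlock: %llu ns\n",
		div_u64(ktime_get_ns() - t0, loops));
}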


>>
>> Signed-off-by: Mike Tipton <quic_mdtipton@...cinc.com>
>> ---
>>
>> Since the original patch was posted a couple years ago, we've continued
>> to hit this for display and now for GPU as well. How frequently depends
>> heavily on the specific chip, product, and use case. Different
>> configurations hit it easier than others. But for both cases it results
>> in obvious visual glitches.
>>
>> The paths being voted for (primarily DDR) are fundamentally shared
>> between clients of all types and priority levels. We can't control their
>> priorities, so aside from having those priorities inherited we're always
>> subject to these sorts of inversions.
>>
>> The motivation isn't really for general performance improvement, but
>> instead to fix the rare cases of visual glitches and artifacts.
>>
>> A similar patch was posted last year [1] to address similar problems.
>>
>> [1] https://lore.kernel.org/all/20240220074300.10805-1-wangrumeng@...omi.corp-partner.google.com/
>>
>> Changes in v2:
>> - Rebase onto linux-next.
>> - Select RT_MUTEXES in Kconfig.
>> - Only use rt_mutex for icc_bw_lock since now there are separate locks
>>    and icc_lock isn't in the critical path.
>> - Reword commit text.
>> - Link to v1: https://lore.kernel.org/all/20220906191423.30109-1-quic_mdtipton@...cinc.com/
>>
>>   drivers/interconnect/Kconfig |  1 +
>>   drivers/interconnect/core.c  | 23 ++++++++++++-----------
>>   2 files changed, 13 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
>> index f2e49bd97d31..f6fd5f2d7d40 100644
>> --- a/drivers/interconnect/Kconfig
>> +++ b/drivers/interconnect/Kconfig
>> @@ -1,6 +1,7 @@
>>   # SPDX-License-Identifier: GPL-2.0-only
>>   menuconfig INTERCONNECT
>>       bool "On-Chip Interconnect management support"
>> +    select RT_MUTEXES
> 
> This pulls in unconditionally all the RT-mutex stuff, which some people
> might not want (although today it's also selected by the I2C subsystem
> for example). I am wondering if we should make it configurable with the
> normal mutex being the default or just follow the i2c example... but
> maybe we can decide this when we have some numbers.
> 

Making the lock type configurable is not a common practice, so we do not 
intend to make that change in this patch.



-- 
Thx and BRs,
Yin

