Message-ID: <CADHxFxSGwSBL0SvHGe6peVZ2=T=cz-PERrAiux8=0v_8heAp2w@mail.gmail.com>
Date: Mon, 19 May 2025 13:13:10 +0800
From: hupu <hupu.gm@...il.com>
To: John Stultz <jstultz@...gle.com>
Cc: linux-kernel@...r.kernel.org, juri.lelli@...hat.com, peterz@...radead.org,
vschneid@...hat.com, mingo@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, hupu@...nssion.com
Subject: Re: [RFC 1/1] sched: Remove unreliable wake_cpu check in proxy_needs_return

Hi John,

I’d like to revisit the discussion about this patch.

As mentioned in the previous email, using wake_cpu directly does
effectively shortcut the deactivate & wakeup path, but it may
introduce performance problems, especially on big.LITTLE
architectures. Let me illustrate with a common scenario on Android:
critical foreground threads often have to wait for a background
thread to release a mutex. Under the "Proxy Execution" mechanism, the
foreground thread is migrated to the CPU where the background thread
is running. However, background threads are typically bound to the
weaker CPUs because they belong to the background cpuset. If
proxy_needs_return() allows the foreground thread to be placed on
wake_cpu, the foreground thread can end up stuck on a little core,
creating a performance bottleneck.

Therefore, I suggest that proxy_needs_return() should always return
false for donor tasks unless the task is already running on a CPU.
This ensures that donor tasks trigger a full CPU re-selection, which
is consistent with the behavior prior to the introduction of "Proxy
Execution" and should not introduce additional overhead.

Furthermore, on Android platforms this behavior allows vendors to
leverage the hook in select_task_rq() to fine-tune CPU selection for
critical threads, enabling better optimization for specific
scenarios.

Additionally, this patch has been validated on an ARM64 platform
emulated via QEMU: it has run for several days without issues.

Looking forward to your response and further discussion!

hupu