Message-ID: <a1ff6b87-48a9-436a-9b62-8664d5207884@amd.com>
Date: Tue, 23 Sep 2025 14:25:25 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Fernand Sieber <sieberf@...zon.com>, <mingo@...hat.com>,
<peterz@...radead.org>
CC: <linux-kernel@...r.kernel.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<bristot@...hat.com>, <vschneid@...hat.com>, <dwmw@...zon.co.uk>,
<jschoenh@...zon.de>, <liuyuxua@...zon.com>
Subject: Re: [PATCH 4/4] sched/fair: Add more core cookie check in wake up
fast path
Hello Fernand,
On 9/22/2025 6:09 PM, Fernand Sieber wrote:
> The fast path in select_idle_sibling() can place tasks on CPUs without
> considering core scheduling constraints, potentially causing immediate
> force idle when the sibling runs an incompatible task.
>
> Add cookie compatibility checks before selecting a CPU in the fast path.
> This prevents placing waking tasks on CPUs where the sibling is running
> an incompatible task, reducing force idle occurrences.
>
> Signed-off-by: Fernand Sieber <sieberf@...zon.com>
> ---
> kernel/sched/fair.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 78b36225a039..a9cbb0e9bb43 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7578,7 +7578,7 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
> */
> if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
> continue;
> - if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> + if (__select_idle_cpu(cpu, p) != -1)
So with Patch 1, you already check for cookie matching when entering
select_idle_smt(), and now each pass of the loop does a
sched_core_cookie_match() which internally loops through the smt mask
all over again! Seems wasteful.
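
For reference, sched_core_cookie_match() currently looks roughly like
this (paraphrasing kernel/sched/sched.h from memory, so please
double-check against your tree):

    static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
    {
            bool idle_core = true;
            int cpu;

            /* Cookie matching is a no-op when core sched is disabled. */
            if (!sched_core_enabled(rq))
                    return true;

            /* Walks every sibling of the target CPU on each call. */
            for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
                    if (!available_idle_cpu(cpu)) {
                            idle_core = false;
                            break;
                    }
            }

            /* An idle core is always fine for a cookie'd task. */
            return idle_core || rq->core->core_cookie == p->core_cookie;
    }

So every __select_idle_cpu(cpu, p) in the loop above can end up walking
the full smt mask once more, on top of the sibling walk that
select_idle_smt() is already doing.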
> return cpu;
> }
>
> @@ -7771,7 +7771,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> */
> lockdep_assert_irqs_disabled();
>
> - if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
> + if ((__select_idle_cpu(target, p) != -1) &&
> asym_fits_cpu(task_util, util_min, util_max, target))
> return target;
>
> @@ -7779,7 +7779,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> * If the previous CPU is cache affine and idle, don't be stupid:
> */
> if (prev != target && cpus_share_cache(prev, target) &&
> - (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
> + (__select_idle_cpu(prev, p) != -1) &&
> asym_fits_cpu(task_util, util_min, util_max, prev)) {
>
> if (!static_branch_unlikely(&sched_cluster_active) ||
> @@ -7811,7 +7811,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> if (recent_used_cpu != prev &&
> recent_used_cpu != target &&
> cpus_share_cache(recent_used_cpu, target) &&
> - (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
> + (__select_idle_cpu(recent_used_cpu, p) != -1) &&
On an SMT-8 system, all this looping over the smt mask on every wakeup
will add up. Is that not a concern? A single task with a core cookie
enabled will add massive overhead to every wakeup in the system.
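
Back of the envelope, assuming an SMT-8 core with no fully idle sibling
(illustrative numbers, not measured):

    siblings visited by select_idle_smt():         8
    smt-mask scan per __select_idle_cpu() call:    8
    cpumask tests per core per wakeup:             8 * 8 = 64  (vs. ~8 today)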
> cpumask_test_cpu(recent_used_cpu, p->cpus_ptr) &&
> asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
>
--
Thanks and Regards,
Prateek