Message-ID: <03b4f58b-6b8d-4138-8a18-c41ec179e3b0@amd.com>
Date: Thu, 25 Sep 2025 07:41:41 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>, Peter Zijlstra
<peterz@...radead.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman
<mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, "Gautham R.
Shenoy" <gautham.shenoy@....com>, Swapnil Sapkal <swapnil.sapkal@....com>,
Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>, "Vincent
Guittot" <vincent.guittot@...aro.org>, Anna-Maria Behnsen
<anna-maria@...utronix.de>, Frederic Weisbecker <frederic@...nel.org>, Thomas
Gleixner <tglx@...utronix.de>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 01/19] sched/fair: Simplify set_cpu_sd_state_*() with
guards
Hello Shrikanth,
Thank you for the review.
On 9/25/2025 1:56 AM, Shrikanth Hegde wrote:
> we have both sd_llc->shared and sd_llc_shared usage spread
> across the code, is it possible to remove sd_llc_shared and use sd_llc->shared
> instead?
>
> Likely sd_llc is cache hot and access to shared should be fast too.
> In turn this could free up some per cpu area.
Ack! That can be done. The motivation was probably to avoid an
additional dereference in the scheduler hot-path, but apart from
{set,test}_idle_cores(), most other usage is either in the slow path,
or we already have "sd_llc" at hand, so dereferencing
"sd_llc->shared" shouldn't be any more expensive.
/me goes and looks at usage of {set,test}_idle_cores()
set_idle_cores() is, I would argue, still in the slow path since the
CPU is going idle. Of all the callers of test_idle_cores(), only
numa_idle_core() doesn't access "sd_llc" prior to calling
test_idle_cores(), but since it runs inside a for_each_cpu() loop,
I think we can make some optimizations there to reduce the amount of
dereferencing.
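Something along these lines, perhaps (hypothetical; the "sds" argument
and the caller-side hoisting are just illustrative, assuming
update_numa_stats() does the LLC lookup once per LLC it visits rather
than once per CPU):

    static inline int numa_idle_core(struct sched_domain_shared *sds,
                                     int idle_core, int cpu)
    {
            if (!static_branch_likely(&sched_smt_present) ||
                idle_core >= 0 || !sds || !READ_ONCE(sds->has_idle_cores))
                    return idle_core;

            /* Prefer a fully idle core over packing SMT siblings. */
            if (is_core_idle(cpu))
                    idle_core = cpu;

            return idle_core;
    }

The caller would of course have to refresh "sds" whenever the loop
crosses an LLC boundary, since the CPUs of a node can span multiple
LLCs.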
>
> Any thoughts?
I think it makes sense. I can send out Patches 1 and 2 together with
this cleanup as a series of its own, and discuss the optimization for
the "nohz.idle_cpus_mask" in a separate series.
--
Thanks and Regards,
Prateek