Message-ID: <4942856d-ad2f-4922-aad9-20a902dae41b@intel.com>
Date: Tue, 8 Jul 2025 15:54:19 +0800
From: "Chen, Yu C" <yu.c.chen@...el.com>
To: Libo Chen <libo.chen@...cle.com>
CC: Juri Lelli <juri.lelli@...hat.com>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, Tim Chen <tim.c.chen@...el.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Abel Wu <wuyun.abel@...edance.com>, "Madadi
Vineeth Reddy" <vineethr@...ux.ibm.com>, Hillf Danton <hdanton@...a.com>,
"Len Brown" <len.brown@...el.com>, <linux-kernel@...r.kernel.org>, Tim Chen
<tim.c.chen@...ux.intel.com>, Ingo Molnar <mingo@...hat.com>, K Prateek Nayak
<kprateek.nayak@....com>, Peter Zijlstra <peterz@...radead.org>, "Gautham R .
Shenoy" <gautham.shenoy@....com>
Subject: Re: [RFC patch v3 02/20] sched: Several fixes for cache aware
scheduling
On 7/8/2025 9:15 AM, Libo Chen wrote:
> Hi Chenyu
>
> On 6/18/25 11:27, Tim Chen wrote:
>> From: Chen Yu <yu.c.chen@...el.com>
>>
>> 1. Fix compile error on percpu allocation.
>> 2. Enqueue to the target CPU rather than the current CPU.
>> 3. NULL LLC sched domain check (Libo Chen).
>
> Can I suggest we completely disable cache-aware scheduling
> for systems without any LLC in the next version? No more added
> fields or function code for them. This info should be easily
> determinable during bootup while building up the topology,
> and it cannot be modified at runtime. Sometimes it's not
> possible for distros to disable it in kconfig just for one
> particular CPU, and SCHED_CACHE_LB isn't enough because it
> doesn't remove the added fields and users can turn it back
> on anyway.
>
Good point. My understanding is that we should introduce
a static key, similar to sched_smt_present, to bypass the
cache-aware scheduling code path if either no LLC is present
or there is only one LLC within the node.
Thanks,
Chenyu
> Thanks,
> Libo
>
>