Message-ID: <f62a2a31-f79a-4715-b9ce-b6a647d6305f@amd.com>
Date: Thu, 2 Jan 2025 11:19:08 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Chuyi Zhou <zhouchuyi@...edance.com>, <mingo@...hat.com>,
<peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>
CC: <chengming.zhou@...ux.dev>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] sched/fair: Ensure select housekeeping cpus in
task_numa_find_cpu

Hello Chuyi,

On 12/27/2024 1:29 PM, Chuyi Zhou wrote:
> [..snip..]
>>
>> I think the for_each_cpu_wrap() was used to reduce contention for xchg
>> operation below. Perhaps we can have a per-cpu temporary mask (like
>> load_balance_mask) if we want to reduce the xchg contention and break
>> this into cpumask_and() + for_each_cpu_wrap() steps. I'm not sure if
>> any of the existing masks (load_balance_mask, select_rq_mask,
>> should_we_balance_tmpmask) can be safely reused. Otherwise, perhaps we
>> can make a case for for_each_cpu_and_wrap() with this use case.
>>
>
>
> for_each_cpu_and_wrap() is a good idea, but it might be slightly off-topic for this subject. Perhaps we should stick with this implementation for now and see what others think about v2.
Sure thing! No strong feeling from my side :)
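
Just for the record, the rough shape of what I had in mind was something
along these lines (completely untested; "numa_find_cpu_mask" is a made-up
placeholder name, and HK_TYPE_DOMAIN is only a guess at the housekeeping
type the patch ends up using):

    /*
     * Untested sketch of the cpumask_and() + for_each_cpu_wrap() idea.
     * A dedicated per-cpu mask would need zalloc_cpumask_var() at init,
     * like load_balance_mask; reusing one of the existing per-cpu masks
     * would need an audit of their other users first.
     */
    static DEFINE_PER_CPU(cpumask_var_t, numa_find_cpu_mask);

    static void task_numa_find_cpu(struct task_numa_env *env,
                                   long taskimp, long groupimp)
    {
            struct cpumask *cpus = this_cpu_cpumask_var_ptr(numa_find_cpu_mask);
            int cpu;

            /* Intersect with the housekeeping mask once, outside the scan. */
            cpumask_and(cpus, cpumask_of_node(env->dst_nid),
                        housekeeping_cpumask(HK_TYPE_DOMAIN));

            /* Keep the wrapped walk so the xchg contention behavior is unchanged. */
            for_each_cpu_wrap(cpu, cpus, env->dst_cpu) {
                    /* existing body: task_numa_compare(env, taskimp, groupimp, ...); */
            }
    }

But as you said, that (or for_each_cpu_and_wrap()) can be looked at
separately; no objections to keeping v2 as is.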
>
>
> Thanks.
>
--
Thanks and Regards,
Prateek