Message-ID: <Z7IYOHDLVUTiYuI5@slm.duckdns.org>
Date: Sun, 16 Feb 2025 06:54:16 -1000
From: Tejun Heo <tj@...nel.org>
To: Andrea Righi <arighi@...dia.com>
Cc: David Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>,
Yury Norov <yury.norov@...il.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Joel Fernandes <joel@...lfernandes.org>, Ian May <ianm@...dia.com>,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCHSET v12 sched_ext/for-6.15] sched_ext: split global idle
cpumask into per-NUMA cpumasks

On Fri, Feb 14, 2025 at 08:39:59PM +0100, Andrea Righi wrote:
> = Overview =
>
> As discussed during the sched_ext office hours, using a global cpumask to
> track the idle CPUs can be inefficient and may not scale well on large
> NUMA systems.
>
> Therefore, split the idle cpumask into multiple per-NUMA node cpumasks to
> improve scalability and performance on such large systems.
>
> Scalability issues seem to be most noticeable on dual-socket Intel
> Sapphire Rapids systems.
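
For readers following along, here is a minimal sketch of the per-node idle
cpumask idea in kernel-style C. This is illustrative only, not the actual
patch code; all names (idle_masks, set_cpu_idle_state, pick_idle_cpu) are
hypothetical:

	#include <linux/cpumask.h>
	#include <linux/nodemask.h>
	#include <linux/topology.h>

	/* One idle cpumask per NUMA node instead of a single global mask. */
	static cpumask_var_t idle_masks[MAX_NUMNODES];

	static int __init idle_masks_init(void)
	{
		int node;

		for_each_node(node) {
			if (!zalloc_cpumask_var_node(&idle_masks[node],
						     GFP_KERNEL, node))
				return -ENOMEM;
		}
		return 0;
	}

	/* Mark/clear a CPU idle in its own node's mask only, so idle
	 * transitions on one node don't bounce a global cacheline. */
	static void set_cpu_idle_state(int cpu, bool idle)
	{
		int node = cpu_to_node(cpu);

		if (idle)
			cpumask_set_cpu(cpu, idle_masks[node]);
		else
			cpumask_clear_cpu(cpu, idle_masks[node]);
	}

	/* Pick an idle CPU, trying the previous CPU's node first to
	 * preserve locality, then falling back to the other nodes. */
	static s32 pick_idle_cpu(s32 prev_cpu)
	{
		int node = cpu_to_node(prev_cpu);
		unsigned int cpu;

		cpu = cpumask_any_and(idle_masks[node], cpu_online_mask);
		if (cpu < nr_cpu_ids)
			return cpu;

		for_each_online_node(node) {
			cpu = cpumask_any_and(idle_masks[node],
					      cpu_online_mask);
			if (cpu < nr_cpu_ids)
				return cpu;
		}
		return -EBUSY;
	}

A real implementation would also need to atomically claim the chosen CPU
(e.g. a test-and-clear on the mask) and pick a sensible node iteration
order; the sketch only shows the data-structure split.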
Applied 1-7 to sched_ext/for-6.15.
Thanks.
--
tejun