Message-ID: <aYOtfCoL2vfvFprQ@slm.duckdns.org>
Date: Wed, 4 Feb 2026 10:35:08 -1000
From: Tejun Heo <tj@...nel.org>
To: Qiliang Yuan <realwujing@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Andrea Righi <arighi@...dia.com>,
Emil Tsalapatis <emil@...alapatis.com>,
Dan Schatzberg <schatzberg.dan@...il.com>,
Jake Hillion <jake@...lion.co.uk>, zhidao su <suzhidao@...omi.com>,
David Dai <david.dai@...ux.dev>,
Qiliang Yuan <yuanql9@...natelecom.cn>,
David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Douglas Anderson <dianders@...omium.org>,
Ryan Newton <newton@...a.com>, sched-ext@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched/ext: Add cpumask to skip unsuitable dispatch
queues

On Wed, Feb 04, 2026 at 04:34:18AM -0500, Qiliang Yuan wrote:
> Add a cpus_allowed cpumask to struct scx_dispatch_q to track the union
> of affinity masks for all tasks enqueued in a user-defined DSQ. This
> allows a CPU to quickly skip DSQs that contain no tasks runnable on the
> current CPU, avoiding wasteful O(N) scans.
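>
> A minimal sketch of the idea (illustrative only; the field placement
> and the consume-side check paraphrase the description rather than
> quote the patch):
>
>         struct scx_dispatch_q {
>                 ...
>                 /* union of cpus_ptr of all queued tasks; user DSQs only */
>                 cpumask_var_t           cpus_allowed;
>         };
>
>         /* consume path: skip DSQs with nothing runnable on this CPU */
>         if (dsq->cpus_allowed && !cpumask_test_cpu(cpu, dsq->cpus_allowed))
>                 continue;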
>
> - Allocate/free cpus_allowed only for user-defined DSQs.
> - Use free_dsq_rcu_callback to safely free the DSQ and its nested mask.
> - Update the mask in dispatch_enqueue() using cpumask_copy() for the
>   first task and cpumask_or() for subsequent ones; skip the update once
>   the mask is already full (see the sketch after this list).
> - Update the DSQ mask in set_cpus_allowed_scx() when a task's affinity
> changes while enqueued.
> - Handle allocation failures in scx_create_dsq() to prevent memory leaks.
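>
> A rough sketch of the enqueue-path update described above (again
> paraphrased; dsq->nr here is the queue's task count after insertion):
>
>         /* dispatch_enqueue(): widen the mask as tasks arrive */
>         if (dsq->cpus_allowed && !cpumask_full(dsq->cpus_allowed)) {
>                 if (dsq->nr == 1)
>                         cpumask_copy(dsq->cpus_allowed, p->cpus_ptr);
>                 else
>                         cpumask_or(dsq->cpus_allowed,
>                                    dsq->cpus_allowed, p->cpus_ptr);
>         }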
>
> This optimization improves performance when many DSQs are in use under
> tight affinity constraints. The extra bitwise work is significantly
> cheaper than the cache misses incurred by iterating over tasks that
> cannot run on the current CPU.
>
> Signed-off-by: Qiliang Yuan <yuanql9@...natelecom.cn>
> Signed-off-by: Qiliang Yuan <realwujing@...il.com>

As Emil pointed out earlier, this adds overhead to the general path that
scales with the number of CPUs, while the benefit isn't that generic.
Similar optimizations can be done from the BPF side, and throwing a lot
of tasks with varying affinity restrictions into a single queue that is
frequently scanned by multiple CPUs is not scalable to begin with.
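
For reference, the BPF side can track roughly the same information
without adding anything to the shared enqueue path. A hypothetical,
untested sketch (MY_DSQ and the callback names are made up; assumes
<= 64 CPUs and the usual scx scheduler boilerplate; bits are only ever
set, so a stale mask costs an extra scan but never hides a runnable
task):

        static u64 dsq_cpus;    /* CPUs some queued task may run on */

        void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p,
                            u64 enq_flags)
        {
                u32 cpu;

                bpf_for(cpu, 0, 64)
                        if (bpf_cpumask_test_cpu(cpu, p->cpus_ptr))
                                __sync_fetch_and_or(&dsq_cpus,
                                                    1ULL << cpu);

                scx_bpf_dsq_insert(p, MY_DSQ, SCX_SLICE_DFL, enq_flags);
        }

        void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu,
                            struct task_struct *prev)
        {
                /* skip the scan if nothing queued can run on @cpu */
                if (!(dsq_cpus & (1ULL << cpu)))
                        return;
                scx_bpf_dsq_move_to_local(MY_DSQ);
        }
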
Thanks.
--
tejun