Message-ID: <ZnxXej8h46lmzrAP@slm.duckdns.org>
Date: Wed, 26 Jun 2024 08:01:30 -1000
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: torvalds@...ux-foundation.org, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
joshdon@...gle.com, brho@...gle.com, pjt@...gle.com,
derkling@...gle.com, haoluo@...gle.com, dvernet@...a.com,
dschatzberg@...a.com, dskarlat@...cmu.edu, riel@...riel.com,
changwoo@...lia.com, himadrics@...ia.fr, memxor@...il.com,
andrea.righi@...onical.com, joel@...lfernandes.org,
linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [PATCH 09/39] sched: Add @reason to
 sched_class->rq_{on|off}line()

Hello,

On Wed, Jun 26, 2024 at 10:23:42AM +0200, Peter Zijlstra wrote:
...
> - cpuset
> - cpuset-v2
> - isolcpus boot crap
>
> And they're all subtly different IIRC, but the cpuset ones are the
> simplest since the task is part of a cgroup, the cgroup cpumask is
> imposed on it, and things should be fairly straightforward.
>
> The isolcpus thing creates a pile of single-CPU partitions and people
> have to manually set cpu-affinity, and here we have some hysterical
> behaviour that I would love to change but have not yet dared to --
> because I know there are people doing dodgy things, as they've been
> sending 'bug' reports.
>
> Specifically it is possible to set a cpumask that spans multiple
> partitions :-( Traditionally the behaviour was to place the task on the
> lowest cpu number; the current behaviour is that the task is placed
> randomly on any CPU in the given mask.

This is what I was missing. I was only thinking of the cpuset case, and
as cpuset partitions are always reflected in the task cpumasks, there
isn't a whole lot to do.
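
Just to spell out why that looked sufficient to me: with the partition
reflected in the task cpumask, a placement check can stay purely
per-task. A minimal sketch (cpumask_test_cpu() and p->cpus_ptr are the
existing interfaces; the wrapper itself is made up for illustration):

/*
 * Sketch only: when cpuset partitions are reflected in the task's
 * cpumask, a per-task test is all a placement path needs.
 * cpumask_test_cpu() and p->cpus_ptr exist today; this wrapper is
 * hypothetical.
 */
static inline bool task_allowed_on(struct task_struct *p, int cpu)
{
	return cpumask_test_cpu(cpu, p->cpus_ptr);
}
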
...
> > While it would
> > make sense to communicate partitions to the BPF scheduler, would it make
> > sense to reject a BPF scheduler based on it? I.e., assuming the feature
> > is implemented, what would distinguish one BPF scheduler which handles
> > partitions specially from another which doesn't care?
>
> Correctness? Anyway, can't you handle this in the kernel part: simply
> never allow a shared runqueue to cross a root_domain's mask and put some
> WARNs in to ensure constraints are respected etc.? It should be fairly
> simple to check that prev_cpu and new_cpu have the same root_domain, for
> instance.

Yeah, I'll plug it. It might as well just be rejecting and ejecting BPF
schedulers when such conditions are detected. The BPF scheduler doesn't
have to use the built-in DSQs and can decide to dispatch to any CPU from
its own BPF queues (however those may be implemented; they can also be
in userspace), so it's a bit tricky to enforce correctness dynamically
after the fact. I'll think more on it.
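
To make that concrete, the kind of check Peter is suggesting could look
roughly like the following in the dispatch path. cpu_rq() and rq->rd are
existing kernel interfaces; the function itself and scx_reject_sched()
are placeholders for whatever hook and eject mechanism sched_ext ends up
using:

/*
 * Sketch only: verify that a dispatch from @prev_cpu to @new_cpu stays
 * within a single root_domain.  cpu_rq() and rq->rd exist today;
 * scx_reject_sched() is a placeholder for the mechanism that would
 * eject the BPF scheduler on a violation.
 */
static bool scx_dispatch_cpu_valid(int prev_cpu, int new_cpu)
{
	if (cpu_rq(prev_cpu)->rd != cpu_rq(new_cpu)->rd) {
		WARN_ON_ONCE(1);
		scx_reject_sched("dispatch crosses root_domain boundary");
		return false;
	}
	return true;
}
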
Thanks.

--
tejun