Message-ID: <Z3-hBiGVJT-R4iuZ@vaxr-BM6660-BM6360>
Date: Thu, 9 Jan 2025 18:12:22 +0800
From: I Hsin Cheng <richard120310@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Refactor can_migrate_task() to eliminate
 looping
On Thu, Jan 09, 2025 at 10:46:27AM +0100, Peter Zijlstra wrote:
> On Thu, Jan 09, 2025 at 01:29:47AM +0800, I Hsin Cheng wrote:
> > kernel/sched/fair.c | 16 ++++++++++------
> > 1 file changed, 10 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 2d16c8545..ce46f61da 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9404,12 +9404,16 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
> > return 0;
> >
> > /* Prevent to re-select dst_cpu via env's CPUs: */
> > - for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) {
> > - if (cpumask_test_cpu(cpu, p->cpus_ptr)) {
> > - env->flags |= LBF_DST_PINNED;
> > - env->new_dst_cpu = cpu;
> > - break;
> > - }
> > + struct cpumask dst_mask;
>
> Except you cannot put cpumask on-stack...
>
> > +
> > + cpumask_and(&dst_mask, env->dst_grpmask, env->cpus);
> > + cpumask_and(&dst_mask, &dst_mask, p->cpus_ptr);
> > +
> > + cpu = cpumask_first(&dst_mask);
> > +
> > + if (cpu < nr_cpu_ids) {
> > + env->flags |= LBF_DST_PINNED;
> > + env->new_dst_cpu = cpu;
> > }
> >
> > return 0;
> > --
> > 2.43.0
> >
> Except you cannot put cpumask on-stack...
Oh, I'm sorry. May I ask the reason? Is it because a cpumask can be
very large?

I assume we're not supposed to modify "env->dst_grpmask" or
"env->cpus" here, so to do something like this we'd need another
cpumask. Explicitly allocating a cpumask on the heap just for this
section would be overkill, I think?

Or maybe the kernel could maintain a global scratch cpumask, so anyone
wishing to perform such operations could do so without allocating a
cpumask themselves?