Message-ID: <20231012080546.GI6307@noisy.programming.kicks-ass.net>
Date: Thu, 12 Oct 2023 10:05:46 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Josh Don <joshdon@...gle.com>
Cc: Ankit Jain <ankitja@...are.com>, yury.norov@...il.com,
andriy.shevchenko@...ux.intel.com, linux@...musvillemoes.dk,
qyousef@...alina.io, pjt@...gle.com, bristot@...hat.com,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
namit@...are.com, amakhalov@...are.com, srinidhir@...are.com,
vsirnapalli@...are.com, vbrahmajosyula@...are.com,
akaher@...are.com, srivatsa@...il.mit.edu
Subject: Re: [PATCH RFC] cpumask: Randomly distribute the tasks within
affinity mask

On Wed, Oct 11, 2023 at 04:55:35PM -0700, Josh Don wrote:
> Hey Peter,
>
> > +static struct cpumask *root_domain_allowed(struct cpumask *newmask,
> > + struct cpumask *scratch,
> > + struct cpumask *valid)
> > +{
> > + struct root_domain *rd;
> > + struct cpumask *mask;
> > + struct rq *rq;
> > +
> > + int first = cpumask_first_and(newmask, valid);
> > + if (first >= nr_cpu_ids)
> > + return NULL;
>
> This picks the first arbitrarily, but isn't it possible for the mask
> to span both isolated and non-isolated cpus? In which case, the rest
> of this just boils down to chance (ie. whatever the span happens to be
> for the first cpu here)?
Yes, but it matches historical behaviour :/
Like I said, ideally we'd flat out reject masks like this, but...