Message-ID: <CABk29NvHx2saNLdYmQgt31R8W28p7=GUtXiiupgE5czXRBAx5g@mail.gmail.com>
Date: Wed, 11 Oct 2023 16:55:35 -0700
From: Josh Don <joshdon@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ankit Jain <ankitja@...are.com>, yury.norov@...il.com,
andriy.shevchenko@...ux.intel.com, linux@...musvillemoes.dk,
qyousef@...alina.io, pjt@...gle.com, bristot@...hat.com,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
namit@...are.com, amakhalov@...are.com, srinidhir@...are.com,
vsirnapalli@...are.com, vbrahmajosyula@...are.com,
akaher@...are.com, srivatsa@...il.mit.edu
Subject: Re: [PATCH RFC] cpumask: Randomly distribute the tasks within
affinity mask

Hey Peter,

> +static struct cpumask *root_domain_allowed(struct cpumask *newmask,
> +					    struct cpumask *scratch,
> +					    struct cpumask *valid)
> +{
> +	struct root_domain *rd;
> +	struct cpumask *mask;
> +	struct rq *rq;
> +
> +	int first = cpumask_first_and(newmask, valid);
> +	if (first >= nr_cpu_ids)
> +		return NULL;

This picks the first cpu arbitrarily, but isn't it possible for the mask
to span both isolated and non-isolated cpus? In that case, the rest of
this just boils down to chance (i.e. whatever the span happens to be for
the first cpu here)?
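
To make that concrete, here's a toy user-space sketch (not the kernel
code; the isolated set, the cpu numbers and the "span" strings are
invented purely for illustration): two masks that each span both
isolated and non-isolated cpus end up resolving to different spans
based only on which cpu happens to be numerically first.

/*
 * Toy user-space sketch of the concern above. If the requested affinity
 * mask spans both isolated and non-isolated cpus, picking the first cpu
 * in (newmask & valid) and using *that* cpu's domain makes the chosen
 * span depend entirely on cpu numbering.
 */
#include <stdio.h>

#define NR_CPUS		8
#define ISOLATED_MASK	0x0cUL	/* cpus 2,3 isolated (hypothetical) */
#define VALID_MASK	0xffUL	/* all cpus online */

static int first_cpu_and(unsigned long a, unsigned long b)
{
	unsigned long m = a & b;

	/* NR_CPUS plays the role of nr_cpu_ids for the "empty" case */
	return m ? __builtin_ctzl(m) : NR_CPUS;
}

static const char *domain_span(int cpu)
{
	return (ISOLATED_MASK & (1UL << cpu)) ? "isolated" : "housekeeping";
}

int main(void)
{
	/* both masks span isolated and non-isolated cpus ... */
	unsigned long mask_a = (1UL << 2) | (1UL << 5);	/* {2,5} */
	unsigned long mask_b = (1UL << 1) | (1UL << 2);	/* {1,2} */

	/* ... yet the resolved span differs purely by which cpu is first */
	printf("mask {2,5}: first=%d span=%s\n",
	       first_cpu_and(mask_a, VALID_MASK),
	       domain_span(first_cpu_and(mask_a, VALID_MASK)));
	printf("mask {1,2}: first=%d span=%s\n",
	       first_cpu_and(mask_b, VALID_MASK),
	       domain_span(first_cpu_and(mask_b, VALID_MASK)));

	return 0;
}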