Message-ID: <3c3cc30f931a61eda1aed056abc03b0839291781.camel@linux.intel.com>
Date: Wed, 10 Dec 2025 10:36:30 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, K Prateek Nayak
<kprateek.nayak@....com>, "Gautham R . Shenoy" <gautham.shenoy@....com>,
Vincent Guittot <vincent.guittot@...aro.org>, Juri Lelli
<juri.lelli@...hat.com>, Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel
Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>, Hillf Danton
<hdanton@...a.com>, Shrikanth Hegde <sshegde@...ux.ibm.com>, Jianyong Wu
<jianyong.wu@...look.com>, Yangyu Chen <cyy@...self.name>, Tingyin Duan
<tingyin.duan@...il.com>, Vern Hao <vernhao@...cent.com>, Vern Hao
<haoxing990@...il.com>, Len Brown <len.brown@...el.com>, Aubrey Li
<aubrey.li@...el.com>, Zhao Liu <zhao1.liu@...el.com>, Chen Yu
<yu.chen.surf@...il.com>, Chen Yu <yu.c.chen@...el.com>, Adam Li
<adamli@...amperecomputing.com>, Aaron Lu <ziqianlu@...edance.com>, Tim
Chen <tim.c.chen@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 07/23] sched/cache: Introduce per runqueue task LLC
preference counter
On Wed, 2025-12-10 at 13:43 +0100, Peter Zijlstra wrote:
> On Wed, Dec 03, 2025 at 03:07:26PM -0800, Tim Chen wrote:
>
> > +static int resize_llc_pref(void)
> > +{
> > + unsigned int *__percpu *tmp_llc_pref;
> > + int i, ret = 0;
> > +
> > + if (new_max_llcs <= max_llcs)
> > + return 0;
> > +
> > + /*
> > + * Allocate temp percpu pointer for old llc_pref,
> > + * which will be released after switching to the
> > + * new buffer.
> > + */
> > + tmp_llc_pref = alloc_percpu_noprof(unsigned int *);
> > + if (!tmp_llc_pref)
> > + return -ENOMEM;
> > +
> > + for_each_present_cpu(i)
> > + *per_cpu_ptr(tmp_llc_pref, i) = NULL;
> > +
> > + /*
> > + * Resize the per rq nr_pref_llc buffer and
> > + * switch to this new buffer.
> > + */
> > + for_each_present_cpu(i) {
> > + struct rq_flags rf;
> > + unsigned int *new;
> > + struct rq *rq;
> > +
> > + rq = cpu_rq(i);
> > + new = alloc_new_pref_llcs(rq->nr_pref_llc, per_cpu_ptr(tmp_llc_pref, i));
> > + if (!new) {
> > + ret = -ENOMEM;
> > +
> > + goto release_old;
> > + }
> > +
> > + /*
> > + * Locking rq ensures that rq->nr_pref_llc values
> > + * don't change with new task enqueue/dequeue
> > + * when we repopulate the newly enlarged array.
> > + */
> > + rq_lock_irqsave(rq, &rf);
> > + populate_new_pref_llcs(rq->nr_pref_llc, new);
> > + rq->nr_pref_llc = new;
> > + rq_unlock_irqrestore(rq, &rf);
> > + }
> > +
> > +release_old:
> > + /*
> > + * Load balance is done under rcu_lock.
> > + * Wait for load balance before and during resizing to
> > + * be done. They may refer to old nr_pref_llc[]
> > + * that hasn't been resized.
> > + */
> > + synchronize_rcu();
> > + for_each_present_cpu(i)
> > + kfree(*per_cpu_ptr(tmp_llc_pref, i));
> > +
> > + free_percpu(tmp_llc_pref);
> > +
> > + /* succeed and update */
> > + if (!ret)
> > + max_llcs = new_max_llcs;
> > +
> > + return ret;
> > +}
>
> > @@ -2674,6 +2787,8 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> > if (has_cluster)
> > static_branch_inc_cpuslocked(&sched_cluster_active);
> >
> > + resize_llc_pref();
> > +
> > if (rq && sched_debug_verbose)
> > pr_info("root domain span: %*pbl\n", cpumask_pr_args(cpu_map));
>
> I suspect people will hate on you for that synchronize_rcu() in there.
>
> Specifically, we do build_sched_domain() for every CPU brought online,
> this means booting 512 CPUs now includes 512 sync_rcu()s.
> Worse, IIRC sync_rcu() is O(n) (or worse -- could be n*ln(n)) in number
> of CPUs, so the total thing will be O(n^2) (or worse) for bringing CPUs
> online.
>
>
We only do synchronize_rcu() in resize_llc_pref() when we encounter a new
LLC and need a larger array of LLC slots, not on every CPU brought online.
That said, I agree that the free is better done in an RCU callback to
avoid the synchronize_rcu() overhead.
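
Something along these lines, perhaps (just a rough sketch, not tested; the
wrapper struct llc_pref_old and free_old_llc_pref() below are made-up names,
since kfree_rcu()/call_rcu() need an rcu_head embedded in the object being
freed, which the plain unsigned int array doesn't have today):

	struct llc_pref_old {
		struct rcu_head rcu;
		unsigned int counts[];	/* old nr_pref_llc[] contents */
	};

	static void free_old_llc_pref(struct rcu_head *head)
	{
		kfree(container_of(head, struct llc_pref_old, rcu));
	}

	/* after rq->nr_pref_llc has been switched to the enlarged buffer */
	call_rcu(&old->rcu, free_old_llc_pref);

Load balance readers under rcu_read_lock() would still see a valid (old)
array until the grace period ends, and resize_llc_pref() itself would no
longer block. I'll rework the patch in that direction.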
Tim