Message-ID: <20251209111244.GJ3707891@noisy.programming.kicks-ass.net>
Date: Tue, 9 Dec 2025 12:12:44 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Ingo Molnar <mingo@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
Hillf Danton <hdanton@...a.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Jianyong Wu <jianyong.wu@...look.com>,
Yangyu Chen <cyy@...self.name>,
Tingyin Duan <tingyin.duan@...il.com>,
Vern Hao <vernhao@...cent.com>, Vern Hao <haoxing990@...il.com>,
Len Brown <len.brown@...el.com>, Aubrey Li <aubrey.li@...el.com>,
Zhao Liu <zhao1.liu@...el.com>, Chen Yu <yu.chen.surf@...il.com>,
Chen Yu <yu.c.chen@...el.com>,
Adam Li <adamli@...amperecomputing.com>,
Aaron Lu <ziqianlu@...edance.com>, Tim Chen <tim.c.chen@...el.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 01/23] sched/cache: Introduce infrastructure for
cache-aware load balancing
On Wed, Dec 03, 2025 at 03:07:20PM -0800, Tim Chen wrote:
> Minor fix in task_tick_cache() to use
> if (mm->mm_sched_epoch >= rq->cpu_epoch)
> to avoid mm_sched_epoch going backwards.
> +static void task_tick_cache(struct rq *rq, struct task_struct *p)
> +{
> +	struct callback_head *work = &p->cache_work;
> +	struct mm_struct *mm = p->mm;
> +
> +	if (!sched_cache_enabled())
> +		return;
> +
> +	if (!mm || !mm->pcpu_sched)
> +		return;
> +
> +	/* avoid moving backwards */
> +	if (mm->mm_sched_epoch >= rq->cpu_epoch)
> +		return;
IIRC this was supposed to be able to wrap; which then means you should
write it like:
	if ((s64)(mm->mm_sched_epoch - rq->cpu_epoch) >= 0)
		return;
or somesuch.
> +
> +	guard(raw_spinlock)(&mm->mm_sched_lock);
> +
> +	if (work->next == work) {
> +		task_work_add(p, work, TWA_RESUME);
> +		WRITE_ONCE(mm->mm_sched_epoch, rq->cpu_epoch);
> +	}
> +}