Message-ID: <20250716104050.GR1613200@noisy.programming.kicks-ass.net>
Date: Wed, 16 Jul 2025 12:40:50 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Beata Michalska <beata.michalska@....com>
Cc: Chris Mason <clm@...a.com>, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v2 01/12] sched/psi: Optimize psi_group_change()
cpu_clock() usage

On Wed, Jul 16, 2025 at 08:53:01AM +0200, Beata Michalska wrote:
> Wouldn't it be enough to use SEQCNT_ZERO? Those are static per-cpu ones.

Yeah, I suppose that should work. The below builds, but I've not yet
observed the issue myself.
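
Not part of the patch, just to spell out why the static initializer is
enough: SEQCNT_ZERO() gives every per-CPU copy of psi_seq a valid, even
(idle) value at build time, so nothing has to run before the first
write/read section and group_init() no longer needs to touch the
seqcount at all. Roughly, the post-patch state looks like this (helper
bodies shown only for context):

/* Each CPU's psi_seq starts out zeroed, i.e. even/idle, at build time. */
static DEFINE_PER_CPU(seqcount_t, psi_seq) = SEQCNT_ZERO(psi_seq);

/* Writers are unchanged; they still bracket the per-CPU update. */
static inline void psi_write_begin(int cpu)
{
	write_seqcount_begin(per_cpu_ptr(&psi_seq, cpu));
}

static inline void psi_write_end(int cpu)
{
	write_seqcount_end(per_cpu_ptr(&psi_seq, cpu));
}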
---
Subject: sched/psi: Fix psi_seq initialization
From: Peter Zijlstra <peterz@...radead.org>
Date: Tue, 15 Jul 2025 15:11:14 -0400

With the seqcount moved out of the group into a global psi_seq,
re-initializing the seqcount on group creation is causing seqcount
corruption.

Fixes: 570c8efd5eb7 ("sched/psi: Optimize psi_group_change() cpu_clock() usage")
Reported-by: Chris Mason <clm@...a.com>
Suggested-by: Beata Michalska <beata.michalska@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/sched/psi.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -176,7 +176,7 @@ struct psi_group psi_system = {
 	.pcpu = &system_group_pcpu,
 };
 
-static DEFINE_PER_CPU(seqcount_t, psi_seq);
+static DEFINE_PER_CPU(seqcount_t, psi_seq) = SEQCNT_ZERO(psi_seq);
 
 static inline void psi_write_begin(int cpu)
 {
@@ -204,11 +204,7 @@ static void poll_timer_fn(struct timer_list *t)
 
 static void group_init(struct psi_group *group)
 {
-	int cpu;
-
 	group->enabled = true;
-	for_each_possible_cpu(cpu)
-		seqcount_init(per_cpu_ptr(&psi_seq, cpu));
 	group->avg_last_update = sched_clock();
 	group->avg_next_update = group->avg_last_update + psi_period;
 	mutex_init(&group->avgs_lock);
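
For the archives, my reading of how the old loop corrupts the count; an
illustrative interleaving, not lifted from Chris' report: with psi_seq
global, a writer can be inside its write section on one CPU while a new
cgroup is being created, and re-initializing that CPU's count mid-write
flips the odd/even convention:

	CPU0 (psi_group_change)		CPU1 (old group_init)

	psi_write_begin(0);		/* seq: 0 -> 1, write in progress */
					seqcount_init(per_cpu_ptr(&psi_seq, 0));
					/* seq reset to 0 while CPU0 is mid-write */
	/* ... update per-CPU group state ... */
	psi_write_end(0);		/* seq: 0 -> 1, odd although no writer is left */

After that, readers spin on an odd count while nothing is being written,
and can see an even, seemingly stable count around the next update, i.e.
return torn data.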