Message-ID: <ZdcnEGQK0VO39QT+@lothringen>
Date: Thu, 22 Feb 2024 11:50:56 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Anna-Maria Behnsen <anna-maria@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
John Stultz <jstultz@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Dumazet <edumazet@...gle.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Arjan van de Ven <arjan@...radead.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Rik van Riel <riel@...riel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Sebastian Siewior <bigeasy@...utronix.de>,
Giovanni Gherdovich <ggherdovich@...e.cz>,
Lukasz Luba <lukasz.luba@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Srinivas Pandruvada <srinivas.pandruvada@...el.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Christian Loehle <christian.loehle@....com>
Subject: Re: [PATCH v11b 18/20] timers: Implement the hierarchical pull model
On Thu, Feb 22, 2024 at 11:37:10AM +0100, Anna-Maria Behnsen wrote:
> +static bool tmigr_inactive_up(struct tmigr_group *group,
> +			      struct tmigr_group *child,
> +			      void *ptr)
> +{
> +	union tmigr_state curstate, newstate, childstate;
> +	struct tmigr_walk *data = ptr;
> +	bool walk_done;
> +	u8 childmask;
> +
> +	childmask = data->childmask;
> +	curstate.state = atomic_read_acquire(&group->migr_state);
> +	childstate.state = 0;
> +
> +	for (;;) {
> +		if (child)
> +			childstate.state = atomic_read(&child->migr_state);
> +
> +		newstate = curstate;
> +		walk_done = true;
> +
> +		/* Reset active bit when the child is no longer active */
> +		if (!childstate.active)
> +			newstate.active &= ~childmask;
> +
> +		if (newstate.migrator == childmask) {
> +			/*
> +			 * Find a new migrator for the group, because the child
> +			 * group is idle!
> +			 */
> +			if (!childstate.active) {
> +				unsigned long new_migr_bit, active = newstate.active;
> +
> +				new_migr_bit = find_first_bit(&active, BIT_CNT);
> +
> +				if (new_migr_bit != BIT_CNT) {
> +					newstate.migrator = BIT(new_migr_bit);
> +				} else {
> +					newstate.migrator = TMIGR_NONE;
> +
> +					/* Changes need to be propagated */
> +					walk_done = false;
> +				}
> +			}
> +		}
> +
> +		newstate.seq++;
> +
> +		WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));
> +
> +		if (atomic_try_cmpxchg(&group->migr_state, &curstate.state,
> +				       newstate.state))
> +			break;
> +		/*
> +		 * Add memory barrier to make sure child and group states order
> +		 * of read is preserved. This barrier is only required when
> +		 * atomic_try_cmpxchg() failed.
> +		 */
> +		smp_mb__after_atomic();

Ideally memory barrier comments should tell:

1) What it orders (you did)
2) What it pairs with (the acquire)

The acquire barrier should be commented accordingly as well. All of this can
of course be done in a further patch (I might lose track beyond 11c).
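
Just to illustrate the shape I have in mind -- a rough, untested sketch only,
the exact wording (and whether the acquire really is the other side of the
pairing) is of course up to you:

	/* Pairs with the smp_mb__after_atomic() below (cmpxchg failure path) */
	curstate.state = atomic_read_acquire(&group->migr_state);

	[...]

		/*
		 * Order the re-read of the child state in the next loop
		 * iteration against the group state that the failed
		 * atomic_try_cmpxchg() reloaded into curstate. Pairs with the
		 * atomic_read_acquire() of group->migr_state above. Only
		 * required when atomic_try_cmpxchg() failed.
		 */
		smp_mb__after_atomic();
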
Reviewed-by: Frederic Weisbecker <frederic@...nel.org>