Message-ID: <YNHr81D/fPD2Q8kM@hirez.programming.kicks-ass.net>
Date: Tue, 22 Jun 2021 15:56:03 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: hannes@...xchg.org, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, matthias.bgg@...il.com, minchan@...gle.com,
timmurray@...gle.com, yt.chang@...iatek.com, wenju.xu@...iatek.com,
jonathan.jmchen@...iatek.com, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-mediatek@...ts.infradead.org, kernel-team@...roid.com
Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Thu, Jun 17, 2021 at 02:26:54PM -0700, Suren Baghdasaryan wrote:
> The psi polling mechanism tries to minimize the number of wakeups needed
> to run psi_poll_work and currently relies on timer_pending() to detect
> when this work is already scheduled. This leaves a window of opportunity
> for psi_group_change to schedule an immediate psi_poll_work after
> poll_timer_fn has been called but before psi_poll_work could reschedule
> itself. Below is a depiction of this entire window:
>
> poll_timer_fn
>   wake_up_interruptible(&group->poll_wait);
>
> psi_poll_worker
>   wait_event_interruptible(group->poll_wait, ...)
>   psi_poll_work
>     psi_schedule_poll_work
>       if (timer_pending(&group->poll_timer)) return;
>       ...
>       mod_timer(&group->poll_timer, jiffies + delay);
>
> Prior to 461daba06bdc we used to rely on the poll_scheduled atomic,
> which was reset and set back inside psi_poll_work, and therefore this
> race window was much smaller.
> The larger window causes an increased number of wakeups, and our
> partners report a visible power regression of ~10mA after applying
> 461daba06bdc.
> Bring back the poll_scheduled atomic and make this race window even
> narrower by resetting poll_scheduled only when we reach the polling
> expiration time. This does not completely eliminate the possibility of
> extra wakeups caused by a race with psi_group_change, but it limits
> them to at most one extra wakeup per tracking window (0.5s in the
> worst case).
> By tracing the number of immediate rescheduling attempts performed by
> psi_group_change and the number of these attempts blocked because the
> psi monitor was already active, we can assess the effects of this
> change:
>
> Before the patch:
>                                            Run#1    Run#2    Run#3
> Immediate reschedules attempted:           684365   1385156  1261240
> Immediate reschedules blocked:             682846   1381654  1258682
> Immediate reschedules (delta):             1519     3502     2558
> Immediate reschedules (% of attempted):    0.22%    0.25%    0.20%
>
> After the patch:
>                                            Run#1    Run#2    Run#3
> Immediate reschedules attempted:           882244   770298   426218
> Immediate reschedules blocked:             881996   769796   426074
> Immediate reschedules (delta):             248      502      144
> Immediate reschedules (% of attempted):    0.03%    0.07%    0.03%
>
> The number of non-blocked immediate reschedules dropped from 0.22-0.25%
> to 0.03-0.07%. The drop is attributed to the decrease in the race window
> size and the fact that we allow this race only when psi monitors reach
> polling window expiration time.
>
> Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
> Reported-by: Kathleen Chang <yt.chang@...iatek.com>
> Reported-by: Wenju Xu <wenju.xu@...iatek.com>
> Reported-by: Jonathan Chen <jonathan.jmchen@...iatek.com>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>

Johannes?

> -/* Schedule polling if it's not already scheduled. */
> -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> +/* Schedule polling if it's not already scheduled or forced. */
> +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> + bool force)
> {
> struct task_struct *task;
>
> - /*
> - * Do not reschedule if already scheduled.
> - * Possible race with a timer scheduled after this check but before
> - * mod_timer below can be tolerated because group->polling_next_update
> - * will keep updates on schedule.
> - */
> - if (timer_pending(&group->poll_timer))
> + /* cmpxchg should be called even when !force to set poll_scheduled */
> + if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
Do you care about memory ordering here? Afaict the whole thing is
supposed to be ordered by ->trigger_lock, so you don't.
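
To illustrate with a generic pattern (made-up names, simplified; this is
not the actual psi code): when both sides of the handoff run under the
same lock, the lock's ACQUIRE/RELEASE already orders the flag against
the surrounding updates, so the atomic op itself can be fully relaxed:

	/* illustrative sketch only -- made-up names, not from psi.c */
	mutex_lock(&grp->lock);				/* ACQUIRE */
	if (!atomic_xchg_relaxed(&grp->poll_scheduled, 1))
		mod_timer(&grp->timer, jiffies + delay);
	mutex_unlock(&grp->lock);			/* RELEASE */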

However, if you did, the code seems to suggest you need RELEASE
ordering, but cmpxchg() as used above can fail, in which case it
doesn't provide any ordering at all.
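
To spell that out against the hunk above (same code, annotations mine):

	/*
	 * A conditional RmW op only provides ordering when it succeeds.
	 * On the failure path no store is performed, so there is no
	 * RELEASE to rely on before the early return.
	 */
	if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
		return;		/* unordered when the cmpxchg fails */
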
Also, I think the more conventional way to write that might be:

	if (atomic_xchg_relaxed(&group->poll_scheduled, 1) && !force)

Hmm?
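
Spelled out against the hunk above, that would read (a sketch only,
untested):

	/*
	 * atomic_xchg() always stores 1, so poll_scheduled is still set
	 * on the early-return path and the old value tells us whether
	 * polling was already scheduled. Because the store is
	 * unconditional, an ordered xchg variant would also provide its
	 * ordering on every path, unlike a cmpxchg() that can fail.
	 */
	if (atomic_xchg_relaxed(&group->poll_scheduled, 1) && !force)
		return;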