Message-Id: <20210427083724.840364566@linutronix.de>
Date: Tue, 27 Apr 2021 10:25:45 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Anna-Maria Behnsen <anna-maria@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Peter Xu <peterx@...hat.com>,
Nitesh Narayan Lal <nitesh@...hat.com>,
Alex Belits <abelits@...vell.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
John Stultz <john.stultz@...aro.org>
Subject: [patch 8/8] hrtimer: Avoid more SMP function calls in clock_was_set()
There are more indicators whether the SMP function calls on clock_was_set()
can be avoided:

- When the remote CPU is currently handling hrtimer_interrupt(), it will
  update the offsets and reevaluate the timer bases before reprogramming
  anyway, so there is nothing to do.

By unconditionally updating the offsets the following checks become
possible:

- When the offset update already happened on the remote CPU, the remote
  update attempt yields the same sequence number and no IPI is required.

- After updating, it can be checked whether the first expiring timer in
  the affected clock bases moves before the first expiring (softirq)
  timer of the CPU. If that's not the case, sending the IPI is not
  required.

Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
kernel/time/hrtimer.c | 66 +++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 57 insertions(+), 9 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -880,6 +880,60 @@ static void hrtimer_reprogram(struct hrt
tick_program_event(expires, 1);
}
+static bool update_needs_ipi(struct hrtimer_cpu_base *cpu_base,
+ unsigned int active)
+{
+ struct hrtimer_clock_base *base;
+ unsigned int seq;
+ ktime_t expires;
+
+ /*
+ * If the remote CPU is currently handling an hrtimer interrupt, it
+ * will update and reevaluate the first expiring timer of all clock
+ * bases before reprogramming. Nothing to do here.
+ */
+ if (cpu_base->in_hrtirq)
+ return false;
+
+ /*
+ * Update the base offsets unconditionally so the following quick
+ * check whether the SMP function call is required works.
+ */
+ seq = cpu_base->clock_was_set_seq;
+ hrtimer_update_base(cpu_base);
+
+ /*
+ * If the sequence did not change over the update then the
+ * remote CPU already handled it.
+ */
+ if (seq == cpu_base->clock_was_set_seq)
+ return false;
+
+ /*
+ * Walk the affected clock bases and check whether the first expiring
+ * timer in a clock base is moving ahead of the first expiring timer of
+ * @cpu_base. If so, the IPI must be invoked because per CPU clock
+ * event devices cannot be remotely reprogrammed.
+ */
+ for_each_active_base(base, cpu_base, active) {
+ struct timerqueue_node *next;
+
+ next = timerqueue_getnext(&base->active);
+ expires = ktime_sub(next->expires, base->offset);
+ if (expires < cpu_base->expires_next)
+ return true;
+
+ /* Extra check for softirq clock bases */
+ if (base->clockid < HRTIMER_BASE_MONOTONIC_SOFT)
+ continue;
+ if (cpu_base->softirq_activated)
+ continue;
+ if (expires < cpu_base->softirq_expires_next)
+ return true;
+ }
+ return false;
+}
+
/*
* Clock was set. This might affect CLOCK_REALTIME, CLOCK_TAI and
* CLOCK_BOOTTIME (for late sleep time injection).
@@ -914,16 +968,10 @@ void clock_was_set(unsigned int bases)
unsigned long flags;
raw_spin_lock_irqsave(&cpu_base->lock, flags);
- /*
- * Only send the IPI when there are timers queued in one of
- * the affected clock bases. Otherwise update the base
- * remote to ensure that the next enqueue of a timer on
- * such a clock base will see the correct offsets.
- */
- if (cpu_base->active_bases & bases)
+
+ if (update_needs_ipi(cpu_base, bases))
cpumask_set_cpu(cpu, mask);
- else
- hrtimer_update_base(cpu_base);
+
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
}
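
[Editor's note] The decision logic of the patch can be modeled as a small
user-space sketch. All names here (struct cpu_base_model, model_update_base(),
model_update_needs_ipi(), global_seq) are illustrative stand-ins for the
kernel's hrtimer_cpu_base, hrtimer_update_base() and update_needs_ipi(); the
per-base walk and the softirq-base handling are omitted, and only the three
quick checks from the changelog are kept:

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

typedef int64_t ktime_t;

/* Hypothetical, simplified stand-in for struct hrtimer_cpu_base */
struct cpu_base_model {
	unsigned int clock_was_set_seq;	/* last clock_was_set() seen here */
	bool in_hrtirq;			/* inside hrtimer_interrupt()? */
	ktime_t expires_next;		/* programmed next event */
	ktime_t first_affected;		/* earliest timer in affected bases */
};

/* Bumped once per clock_was_set() event */
static unsigned int global_seq = 1;

/* Stand-in for hrtimer_update_base(): fold in the new offsets and
 * record the sequence number current at update time. */
static void model_update_base(struct cpu_base_model *b)
{
	b->clock_was_set_seq = global_seq;
}

static bool model_update_needs_ipi(struct cpu_base_model *b)
{
	unsigned int seq;

	/* The interrupt handler reevaluates all bases anyway */
	if (b->in_hrtirq)
		return false;

	/* Unconditionally update, then compare sequence numbers */
	seq = b->clock_was_set_seq;
	model_update_base(b);
	if (seq == b->clock_was_set_seq)
		return false;	/* remote CPU already handled it */

	/* IPI only if an affected timer now expires before the
	 * programmed event: clock event devices cannot be
	 * reprogrammed remotely. */
	return b->first_affected < b->expires_next;
}
```

The order of the checks mirrors the patch: the cheap state tests come first,
and the offset update happens unconditionally so the sequence comparison is
meaningful.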