Message-ID: <20201211212847.GA595642@lothringen>
Date: Fri, 11 Dec 2020 22:28:47 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>
Subject: Re: tick/sched: Make jiffies update quick check more robust
On Fri, Dec 04, 2020 at 11:55:19AM +0100, Thomas Gleixner wrote:
> The quick check in tick_do_update_jiffies64() whether jiffies need to be
> updated is not really correct under all circumstances and on all
> architectures, especially not on 32bit systems.
>
> The quick check does:
>
> if (now < READ_ONCE(tick_next_period))
> return;
>
> and the counterpart in the update is:
>
> WRITE_ONCE(tick_next_period, next_update_time);
>
> This has two problems:
>
> 1) On weakly ordered architectures there is no guarantee that the stores
> before the WRITE_ONCE() become visible before it, which means that other CPUs can
> operate on a stale jiffies value.
>
> 2) On 32bit the store of tick_next_period which is an u64 is split into
> two 32bit stores. If the first 32bit store advances tick_next_period
> far out and the second 32bit store is delayed (virt, NMI ...), then
> jiffies will become stale until the second 32bit store happens.
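Right, on 32bit that WRITE_ONCE() really is two 32bit stores, something
like the below (illustration only, the word order depends on endianness
and compiler, upper_32_bits()/lower_32_bits() just used for clarity):

	/* tick_next_period = next_update_time; emitted as two stores */
	((u32 *)&tick_next_period)[1] = upper_32_bits(next_update_time);
	/*
	 * vCPU preempted, NMI, ... - the quick check on another CPU now
	 * sees a time far in the future and skips the jiffies update.
	 */
	((u32 *)&tick_next_period)[0] = lower_32_bits(next_update_time);

so the check can be fooled until the second half of the store lands.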
>
> Address this by separating the handling for 32bit and 64bit.
>
> On 64bit problem #1 is addressed by replacing READ_ONCE() / WRITE_ONCE()
> with smp_load_acquire() / smp_store_release().
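So the pairing on 64bit is essentially (simplified sketch, not the exact
diff, with ticks/next_update_time as in the existing update path):

	/* Quick check, pairs with the smp_store_release() below */
	if (now < smp_load_acquire(&tick_next_period))
		return;

and on the update side, under jiffies_lock:

	jiffies_64 += ticks;
	/* Make the jiffies update visible before the new period */
	smp_store_release(&tick_next_period, next_update_time);

which guarantees that a CPU observing the new tick_next_period in the
quick check also observes the updated jiffies_64.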
>
> On 32bit problem #2 is addressed by protecting the quick check with the
> jiffies sequence counter. The loads and stores can be plain because the
> sequence count mechanism already provides the required barriers.
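And IIUC the 32bit quick check then becomes something along these lines
(again only a sketch, with jiffies_seq being the existing jiffies
sequence count):

	unsigned int seq;
	ktime_t nextp;

	do {
		seq = read_seqcount_begin(&jiffies_seq);
		nextp = tick_next_period;
	} while (read_seqcount_retry(&jiffies_seq, seq));

	if (now < nextp)
		return;

Since tick_next_period is only written inside the
write_seqcount_begin()/end() section, a reader either retries or sees a
consistent 64bit value, so no torn store can leak into the check.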
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>

Looks very good! Thanks!
Reviewed-by: Frederic Weisbecker <frederic@...nel.org>