Message-ID: <20201207095907.GI3040@hirez.programming.kicks-ass.net>
Date:   Mon, 7 Dec 2020 10:59:07 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Will Deacon <will@...nel.org>
Subject: Re: tick/sched: Make jiffies update quick check more robust

On Fri, Dec 04, 2020 at 11:55:19AM +0100, Thomas Gleixner wrote:
>  	/*
> +	 * 64bit can do a quick check without holding jiffies lock and
> +	 * without looking at the sequence count. The smp_load_acquire()
>  	 * pairs with the update done later in this function.
>  	 *
> +	 * 32bit cannot do that because the store of tick_next_period
> +	 * consists of two 32bit stores and the first store could move it
> +	 * to a random point in the future.
>  	 */
> +	if (IS_ENABLED(CONFIG_64BIT)) {
> +		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
> +			return;

Explicit ACQUIRE
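
FWIW, the pairing can be sketched in plain C11 (userspace sketch,
illustrative names only, with the C11 atomics standing in for
smp_load_acquire()/smp_store_release()):

	#include <stdatomic.h>
	#include <stdint.h>

	static uint64_t payload;	 /* plays the role of jiffies_64 */
	static _Atomic uint64_t guard;	 /* plays the role of tick_next_period */

	/* writer side, serialized elsewhere (think: under jiffies_lock) */
	void writer(uint64_t next)
	{
		payload++;
		/* RELEASE: the payload store cannot pass the guard store */
		atomic_store_explicit(&guard, next, memory_order_release);
	}

	/* lockless reader side */
	uint64_t reader(uint64_t now)
	{
		/*
		 * ACQUIRE: if this load observes the released guard value,
		 * the writer's payload store is visible as well.
		 */
		if (now < atomic_load_explicit(&guard, memory_order_acquire))
			return payload;
		return 0;
	}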

> +	} else {
> +		unsigned int seq;
> +
> +		/*
> +		 * Avoid contention on jiffies_lock and protect the quick
> +		 * check with the sequence count.
> +		 */
> +		do {
> +			seq = read_seqcount_begin(&jiffies_seq);
> +			nextp = tick_next_period;
> +		} while (read_seqcount_retry(&jiffies_seq, seq));
> +
> +		if (ktime_before(now, nextp))
> +			return;

Actually has an implicit ACQUIRE:

	read_seqcount_retry() implies smp_rmb(), which ensures
	LOAD->LOAD order, IOW any later load must happen after our
	@tick_next_period load.

	Then it has a control dependency on ktime_before(now, nextp),
	which ensures LOAD->STORE order.

	Combined we have a LOAD->{LOAD,STORE} order on the
	@tick_next_period load, IOW ACQUIRE.
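
Or with the ordering annotated directly on the quoted code (same
logic, comments added):

	do {
		seq = read_seqcount_begin(&jiffies_seq);
		nextp = tick_next_period;	/* A: the interesting load */
	} while (read_seqcount_retry(&jiffies_seq, seq));
						/* implies smp_rmb():
						   A before any later LOAD */

	if (ktime_before(now, nextp))		/* control dependency on A:
						   A before any later STORE */
		return;

	raw_spin_lock(&jiffies_lock);		/* this lock's STORE is thus
						   ordered after A */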

> +	}
>  
> +	/* Quick check failed, i.e. update is required. */
>  	raw_spin_lock(&jiffies_lock);

Another ACQUIRE, which means the above ACQUIRE only ensures we load the
lock value after the @tick_next_period load?

Or are we trying to guarantee the caller is sure to observe the new
jiffies value if we return?
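
IOW something like this (hypothetical 64bit caller, sketch only):

	tick_do_update_jiffies64(now);	/* quick check passes, early return */
	j = READ_ONCE(jiffies_64);	/* would then have to observe the
					   update that advanced
					   tick_next_period */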

> +	/*
> +	 * Reevaluate with the lock held. Another CPU might have done the
> +	 * update already.
> +	 */
>  	if (ktime_before(now, tick_next_period)) {
>  		raw_spin_unlock(&jiffies_lock);
>  		return;
> @@ -112,11 +118,25 @@ static void tick_do_update_jiffies64(kti
>  	jiffies_64 += ticks;
>  
>  	/*
> +	 * Keep the tick_next_period variable up to date.
>  	 */
> +	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
> +
> +	if (IS_ENABLED(CONFIG_64BIT)) {
> +		/*
> +		 * Pairs with smp_load_acquire() in the lockless quick
> +		 * check above and ensures that the update to jiffies_64 is
> +		 * not reordered vs. the store to tick_next_period, neither
> +		 * by the compiler nor by the CPU.
> +		 */
> +		smp_store_release(&tick_next_period, nextp);
> +	} else {
> +		/*
> +		 * A plain store is good enough on 32bit as the quick check
> +		 * above is protected by the sequence count.
> +		 */
> +		tick_next_period = nextp;
> +	}
>  
>  	/*
>  	 * Release the sequence count. calc_global_load() below is not
