Message-ID: <20180717113330.GU2476@hirez.programming.kicks-ass.net>
Date:   Tue, 17 Jul 2018 13:33:30 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     songliubraving@...com, linux-kernel@...r.kernel.org,
        dave.hansen@...el.com, hpa@...or.com, riel@...riel.com,
        tglx@...utronix.de, mingo@...nel.org, torvalds@...ux-foundation.org
Cc:     linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/mm] x86/mm/tlb: Make lazy TLB mode lazier

On Tue, Jul 17, 2018 at 02:35:08AM -0700, tip-bot for Rik van Riel wrote:
> @@ -242,17 +244,40 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  			   next->context.ctx_id);
>  
>  		/*
> +		 * Even in lazy TLB mode, the CPU should stay set in the
> +		 * mm_cpumask. The TLB shootdown code can figure out
> +		 * from cpu_tlbstate.is_lazy whether or not to send an IPI.
>  		 */
>  		if (WARN_ON_ONCE(real_prev != &init_mm &&
>  				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
>  			cpumask_set_cpu(cpu, mm_cpumask(next));
>  
> +		/*
> +		 * If the CPU is not in lazy TLB mode, we are just switching
> +		 * from one thread in a process to another thread in the same
> +		 * process. No TLB flush required.
> +		 */
> +		if (!was_lazy)
> +			return;
> +
> +		/*
> +		 * Read the tlb_gen to check whether a flush is needed.
> +		 * If the TLB is up to date, just use it.
> +		 * The barrier synchronizes with the tlb_gen increment in
> +		 * the TLB shootdown code.
> +		 */
> +		smp_mb();

What exactly is this smp_mb() ordering? The above comment is
insufficient. Is it the cpumask_set_cpu() vs the atomic64_read()?

If so, should this not be smp_mb__after_atomic() (iow a NO-OP on x86)?

If it is not so, please fix the comment to explain things.
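
If it is the former, a minimal sketch of what I'd expect is below
(untested, illustration only, context lines elided):

		if (!was_lazy)
			return;

		/*
		 * Order the cpumask_set_cpu() above against the tlb_gen
		 * read below. cpumask_set_cpu() is an atomic RMW, so on
		 * x86 the LOCKed op already acts as a full barrier and
		 * smp_mb__after_atomic() is a no-op; weakly ordered
		 * architectures get the fence they need.
		 */
		smp_mb__after_atomic();

		next_tlb_gen = atomic64_read(&next->context.tlb_gen);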

> +		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
> +		if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
> +				next_tlb_gen)
> +			return;
> +
> +		/*
> +		 * TLB contents went out of date while we were in lazy
> +		 * mode. Fall through to the TLB switching code below.
> +		 */
> +		new_asid = prev_asid;
> +		need_flush = true;
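
FWIW, my (possibly wrong) reading of the flush side this is meant to
pair with, in flush_tlb_mm_range(), schematically:

	/* bump the mm-wide generation; atomic64_inc_return() implies a full barrier */
	info.new_tlb_gen = inc_mm_tlb_gen(mm);

	/*
	 * ... then walk mm_cpumask(mm) and consult cpu_tlbstate.is_lazy
	 * per CPU to decide between sending the IPI and letting the lazy
	 * CPU pick up the new tlb_gen at its next switch_mm_irqs_off().
	 */

If that is indeed the pairing, the comment should spell out which
store/load on the switch_mm() side orders against which on the flush
side.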
