Message-ID: <3c63a064-eeb1-0013-c345-a9fb71006357@linux.intel.com>
Date:   Fri, 22 Jun 2018 10:23:25 -0700
From:   Dave Hansen <dave.hansen@...ux.intel.com>
To:     Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Cc:     x86@...r.kernel.org, luto@...nel.org, mingo@...nel.org,
        tglx@...utronix.de, efault@....de, songliubraving@...com,
        kernel-team@...com
Subject: Re: [PATCH 5/7] x86,tlb: only send page table free TLB flush to lazy
 TLB CPUs

On 06/20/2018 12:56 PM, Rik van Riel wrote:
> CPUs in TLBSTATE_OK have either received TLB flush IPIs earlier on during
> the munmap (when the user memory was unmapped), or have context switched
> and reloaded during that stage of the munmap.
> 
> Page table free TLB flushes only need to be sent to CPUs in lazy TLB
> mode, whose TLB contents might not be up to date yet.
> 
> Signed-off-by: Rik van Riel <riel@...riel.com>
> Tested-by: Song Liu <songliubraving@...com>
> ---
>  arch/x86/mm/tlb.c | 26 +++++++++++++++++++++++---
>  1 file changed, 23 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 4b27d8469848..40b00055c883 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -752,11 +752,31 @@ void tlb_flush_remove_tables_local(void *arg)
>  void tlb_flush_remove_tables(struct mm_struct *mm)
>  {
>  	int cpu = get_cpu();
> +	cpumask_var_t varmask;
> +
> +	if (cpumask_any_but(mm_cpumask(mm), cpu) >= nr_cpu_ids)
> +		return;

Oh, man, we need a wrapper for that little nugget.  I was really
scratching my head about what this was doing until I saw the
cpumask_any_but() comment:

	 * Returns >= nr_cpu_ids if no cpus set.

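Something like this would read a lot better at the call site (name is
hypothetical, just a sketch):

	static inline bool mm_has_other_cpus(struct mm_struct *mm, int cpu)
	{
		/* cpumask_any_but() returns >= nr_cpu_ids if no cpus set */
		return cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids;
	}

Then the early-out becomes self-documenting:

	if (!mm_has_other_cpus(mm, cpu))
		return;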

> +	if (!zalloc_cpumask_var(&varmask, GFP_ATOMIC)) {
> +		/* Flush the TLB on all CPUs that have this mm loaded. */
> +		smp_call_function_many(mm_cpumask(mm), tlb_flush_remove_tables_local, (void *)mm, 1);
> +	}

Maybe:

	/*
	 * If the cpumask allocation fails, do a brute-force
	 * flush on all CPUs that have this mm loaded.
	 */

Also, don't you need a return inside the if(){} block here?
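Putting those two together, the fallback path could look like this
(sketch only; whatever teardown the function does at the end, such as
the put_cpu() pairing the get_cpu() above, would need to happen before
this return as well):

	if (!zalloc_cpumask_var(&varmask, GFP_ATOMIC)) {
		/*
		 * If the cpumask allocation fails, do a brute-force
		 * flush on all CPUs that have this mm loaded.
		 */
		smp_call_function_many(mm_cpumask(mm),
				       tlb_flush_remove_tables_local,
				       (void *)mm, 1);
		return;
	}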

>  	/*
> -	 * XXX: this really only needs to be called for CPUs in lazy TLB mode.
> +	 * CPUs in TLBSTATE_OK either received a TLB flush IPI while the user
> +	 * pages in this address range were unmapped, or have context switched
> +	 * and reloaded %CR3 since then.
> +	 *
> +	 * Shootdown IPIs at page table freeing time only need to be sent to
> +	 * CPUs that may have out of date TLB contents.
>  	 */
> -	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
> -		smp_call_function_many(mm_cpumask(mm), tlb_flush_remove_tables_local, (void *)mm, 1);
> +	for_each_cpu(cpu, mm_cpumask(mm)) {
> +		if (per_cpu(cpu_tlbstate.state, cpu) != TLBSTATE_OK)
> +			cpumask_set_cpu(cpu, varmask);
> +	}
> +
> +	smp_call_function_many(varmask, tlb_flush_remove_tables_local, (void *)mm, 1);
> +	free_cpumask_var(varmask);
>  }

Would this look any better if you wrapped the mask manipulation in a:

void mm_fill_lazy_tlb_cpu_mask(struct mm_struct *mm,
			       cpumask_var_t *varmask)

helper, including the explanation comments?

	cpumask_var_t lazy_cpus;
	
	... cpumask allocation/fallback here

	mm_fill_lazy_tlb_cpu_mask(mm, &lazy_cpus);
	smp_call_function_many(lazy_cpus, tlb_flush_remove_...
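The helper body would basically just lift the loop and the explanation
comment out of tlb_flush_remove_tables(), something like (sketch):

	static void mm_fill_lazy_tlb_cpu_mask(struct mm_struct *mm,
					      cpumask_var_t *lazy_cpus)
	{
		int cpu;

		/*
		 * CPUs in TLBSTATE_OK either received a TLB flush IPI
		 * while the user pages were unmapped, or have reloaded
		 * %CR3 since then.  Only lazy-TLB CPUs can still have
		 * stale TLB entries for this mm.
		 */
		for_each_cpu(cpu, mm_cpumask(mm)) {
			if (per_cpu(cpu_tlbstate.state, cpu) != TLBSTATE_OK)
				cpumask_set_cpu(cpu, *lazy_cpus);
		}
	}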
