Message-ID: <20170811094530.c6wrlk2k6n6zlvdo@hirez.programming.kicks-ass.net>
Date:   Fri, 11 Aug 2017 11:45:30 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     torvalds@...ux-foundation.org, will.deacon@....com,
        oleg@...hat.com, paulmck@...ux.vnet.ibm.com,
        benh@...nel.crashing.org, mpe@...erman.id.au, npiggin@...il.com
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        stern@...land.harvard.edu, Russell King <linux@...linux.org.uk>,
        Heiko Carstens <heiko.carstens@...ibm.com>,
        Ralf Baechle <ralf@...ux-mips.org>,
        Vineet Gupta <vgupta@...opsys.com>,
        "David S. Miller" <davem@...emloft.net>,
        Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH -v2 1/4] mm: Rework {set,clear,mm}_tlb_flush_pending()

On Wed, Aug 02, 2017 at 01:38:38PM +0200, Peter Zijlstra wrote:
>  	/*
> +	 * The only time this value is relevant is when there are indeed pages
> +	 * to flush. And we'll only flush pages after changing them, which
> +	 * requires the PTL.
> +	 *
> +	 * So the ordering here is:
> +	 *
> +	 *	mm->tlb_flush_pending = true;
> +	 *	spin_lock(&ptl);
> +	 *	...
> +	 *	set_pte_at();
> +	 *	spin_unlock(&ptl);
> +	 *
> +	 *				spin_lock(&ptl)
> +	 *				mm_tlb_flush_pending();
> +	 *				....

Crud, so while I was rebasing Nadav's patches I realized that this does
not in fact work for PPC and split PTL, because the PPC lwsync relies
on the address dependency to actually produce the ordering.

Also, since Nadav switched to atomic_inc/atomic_dec, I'll send a patch
to add smp_mb__after_atomic(), and

> +	 *				spin_unlock(&ptl);
> +	 *
> +	 *	flush_tlb_range();
> +	 *	mm->tlb_flush_pending = false;
> +	 *
> +	 * So the =true store is constrained by the PTL unlock, and the =false
> +	 * store is constrained by the TLB invalidate.
>  	 */
>  }
>  /* Clearing is done after a TLB flush, which also provides a barrier. */
>  static inline void clear_tlb_flush_pending(struct mm_struct *mm)
>  {
> +	/* see set_tlb_flush_pending */

smp_mb__before_atomic() here. That also avoids the whole reliance on the
tlb_flush nonsense.

It will overstuff barriers on some platforms though :/

>  	mm->tlb_flush_pending = false;
>  }
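Roughly something like this, as a sketch only (assuming Nadav's
conversion of tlb_flush_pending to an atomic_t; names follow the quoted
patch, the actual patch may differ):

	static inline void set_tlb_flush_pending(struct mm_struct *mm)
	{
		atomic_inc(&mm->tlb_flush_pending);
		/*
		 * Full barrier where needed, so the increment is
		 * visible before the PTE changes done under the PTL,
		 * without relying on the PTL unlock for ordering.
		 */
		smp_mb__after_atomic();
	}

	static inline void clear_tlb_flush_pending(struct mm_struct *mm)
	{
		/*
		 * Full barrier so the TLB invalidate completes before
		 * the count drops, without relying on flush_tlb_range()
		 * to act as a barrier.
		 */
		smp_mb__before_atomic();
		atomic_dec(&mm->tlb_flush_pending);
	}

That keeps the inc ordered before the PTE store and the dec ordered
after the TLB invalidate without leaning on the PTL or the flush.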
