Date:   Thu, 25 May 2017 18:43:55 -0700
From:   Nadav Amit <nadav.amit@...il.com>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     X86 ML <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Borislav Petkov <bpetkov@...e.de>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Rik van Riel <riel@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Nadav Amit <namit@...are.com>, Michal Hocko <mhocko@...e.com>,
        Arjan van de Ven <arjan@...ux.intel.com>
Subject: Re: [PATCH v3 2/8] x86/mm: Change the leave_mm() condition for local
 TLB flushes


> On May 25, 2017, at 5:47 PM, Andy Lutomirski <luto@...nel.org> wrote:
> 
> On a remote TLB flush, we leave_mm() if we're TLBSTATE_LAZY.  For a
> local flush_tlb_mm_range(), we leave_mm() if !current->mm.  These
> are approximately the same condition -- the scheduler sets lazy TLB
> mode when switching to a thread with no mm.
> 
> I'm about to merge the local and remote flush code, but for ease of
> verifying and bisecting the patch, I want the local and remote flush
> behavior to match first.  This patch changes the local code to match
> the remote code.
> 
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Dave Hansen <dave.hansen@...el.com>
> Cc: Nadav Amit <namit@...are.com>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Arjan van de Ven <arjan@...ux.intel.com>
> Signed-off-by: Andy Lutomirski <luto@...nel.org>
> ---
> arch/x86/mm/tlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 776469cc54e0..3143c9a180e5 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -311,7 +311,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> 		goto out;
> 	}
> 
> -	if (!current->mm) {
> +	if (this_cpu_read(cpu_tlbstate.state) != TLBSTATE_OK) {
> 		leave_mm(smp_processor_id());

Maybe it is overkill, but you may want to have two variants: leave_mm()
and leave_mm_irq_off(). Currently, leave_mm() does not disable IRQs, but
after patch 6 it does. Here you do indeed need IRQs disabled, but in the
call sites that existed prior to this patch you do not.
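
Roughly what I have in mind (only a sketch to illustrate the split; the
leave_mm_irq_off() name is just the suggestion above, and the real
leave_mm() body in arch/x86/mm/tlb.c would simply move into the inner
helper):

	/* Assumes the caller has already disabled interrupts. */
	static void leave_mm_irq_off(int cpu)
	{
		/* ... existing leave_mm() body goes here ... */
	}

	void leave_mm(int cpu)
	{
		unsigned long flags;

		local_irq_save(flags);
		leave_mm_irq_off(cpu);
		local_irq_restore(flags);
	}

Then the path here, which needs the IRQ protection, keeps calling
leave_mm(), while callers that already run with IRQs disabled (e.g. the
remote-flush IPI path) can use leave_mm_irq_off() and avoid the extra
save/restore.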
