Date:   Sat, 06 Mar 2021 12:12:54 -0000
From:   "tip-bot2 for Nadav Amit" <tip-bot2@...utronix.de>
To:     linux-tip-commits@...r.kernel.org
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>,
        Nadav Amit <namit@...are.com>, Ingo Molnar <mingo@...nel.org>,
        x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/mm] x86/mm/tlb: Do not make is_lazy dirty for no reason

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     09c5272e48614a30598e759c3c7bed126d22037d
Gitweb:        https://git.kernel.org/tip/09c5272e48614a30598e759c3c7bed126d22037d
Author:        Nadav Amit <namit@...are.com>
AuthorDate:    Sat, 20 Feb 2021 15:17:09 -08:00
Committer:     Ingo Molnar <mingo@...nel.org>
CommitterDate: Sat, 06 Mar 2021 12:59:10 +01:00

x86/mm/tlb: Do not make is_lazy dirty for no reason

Blindly writing to is_lazy when the written value is identical to the
old value needlessly dirties the cacheline. Avoid such redundant
writes to cut down on cache coherency traffic.

Suggested-by: Dave Hansen <dave.hansen@...ux.intel.com>
Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Dave Hansen <dave.hansen@...ux.intel.com>
Link: https://lore.kernel.org/r/20210220231712.2475218-7-namit@vmware.com
---
 arch/x86/mm/tlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 345a0af..17ec4bf 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -469,7 +469,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		__flush_tlb_all();
 	}
 #endif
-	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
+	if (was_lazy)
+		this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
 	/*
 	 * The membarrier system call requires a full memory barrier and
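
The guarded this_cpu_write() in the hunk above stores to
cpu_tlbstate_shared.is_lazy only when the CPU really was lazy, reusing
the was_lazy value the function has already read instead of writing
false unconditionally. Below is a minimal standalone C sketch of the
same write-avoidance idea; the struct, alignment and function names
are invented for illustration and are not the kernel's percpu
machinery.

/*
 * Minimal sketch (not kernel code): skip a store entirely when the
 * value in memory is already what we would write, so the cacheline
 * that other CPUs read is not needlessly marked dirty and bounced
 * around by coherency traffic.
 */
#include <stdbool.h>
#include <stdio.h>

#define CACHELINE_SIZE 64

/* Model of a per-CPU flag that remote CPUs poll, like is_lazy. */
struct tlb_state_shared_demo {
	bool is_lazy;
} __attribute__((aligned(CACHELINE_SIZE)));

static struct tlb_state_shared_demo demo_state;

/* Unconditional store: dirties the line even when nothing changes. */
static void set_lazy_always(bool lazy)
{
	demo_state.is_lazy = lazy;
}

/* Guarded store: touches memory only when the value actually changes. */
static void set_lazy_if_changed(bool lazy)
{
	if (demo_state.is_lazy != lazy)
		demo_state.is_lazy = lazy;
}

int main(void)
{
	/*
	 * On the common context-switch path the CPU was not lazy, so the
	 * guarded variant performs no store at all and the cacheline
	 * stays clean for CPUs that merely read the flag.
	 */
	set_lazy_always(false);
	set_lazy_if_changed(false);

	printf("is_lazy = %d\n", demo_state.is_lazy);
	return 0;
}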
