Message-ID: <173858700223.10177.5958477749533350297.tip-bot2@tip-bot2>
Date: Mon, 03 Feb 2025 12:50:02 -0000
From: "tip-bot2 for Jann Horn" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Jann Horn <jannh@...gle.com>,
 "Peter Zijlstra (Intel)" <peterz@...radead.org>, stable@...r.kernel.org,
 x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/mm] x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     3ef938c3503563bfc2ac15083557f880d29c2e64
Gitweb:        https://git.kernel.org/tip/3ef938c3503563bfc2ac15083557f880d29c2e64
Author:        Jann Horn <jannh@...gle.com>
AuthorDate:    Fri, 03 Jan 2025 19:39:38 +01:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 03 Feb 2025 11:46:03 +01:00

x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs

On the following path, flush_tlb_range() can be used for zapping normal
PMD entries (PMD entries that point to page tables) together with the PTE
entries in the pointed-to page table:

    collapse_pte_mapped_thp
      pmdp_collapse_flush
        flush_tlb_range
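
For reference, here is a simplified sketch of what the generic
pmdp_collapse_flush() does (loosely based on mm/pgtable-generic.c; the exact
code differs across kernel versions and architectures may override it). The
important part is that a PMD entry pointing to a page table is cleared, and
flush_tlb_range() is then expected to flush the non-leaf PMD translation as
well as the PTE translations underneath it:

    /* Simplified sketch, not the exact kernel source. */
    pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
                              pmd_t *pmdp)
    {
            pmd_t pmd;

            /* Clear the PMD entry that points to a page table full of PTEs. */
            pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

            /*
             * This flush has to remove the non-leaf PMD translation as well
             * as the PTE translations it covered, so it must not be treated
             * as a leaf-only ("last level") invalidation.
             */
            flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);

            return pmd;
    }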

The arm64 version of flush_tlb_range() has a comment describing that it can
be used for page table removal, and does not use any last-level
invalidation optimizations. Fix the x86 version by making it behave the
same way.
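
For comparison, the arm64 flush_tlb_range() looks roughly like the following
(paraphrased from arch/arm64/include/asm/tlbflush.h; argument lists differ
between kernel versions). Passing 'false' for the last-level parameter is
what keeps intermediate (table) entries included in the invalidation:

    /* Paraphrased sketch of the arm64 definition, not exact source. */
    static inline void flush_tlb_range(struct vm_area_struct *vma,
                                       unsigned long start, unsigned long end)
    {
            /*
             * A leaf-only ("last level") invalidation is not usable here,
             * because callers may be removing intermediate (table) entries.
             */
            __flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
    }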

Currently, x86 only uses this information (the freed_tables flag) for the
following two purposes, which I think means the issue doesn't have much
impact:

 - In native_flush_tlb_multi() for checking if lazy TLB CPUs need to be
   IPI'd to avoid issues with speculative page table walks (see the sketch
   after this list).
 - In Hyper-V TLB paravirtualization, again for lazy TLB stuff.
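
To illustrate the first point, the IPI decision in the native flush path is
roughly the following (simplified from arch/x86/mm/tlb.c; helper names and
details vary between kernel versions). When freed_tables is set, lazy-TLB
CPUs are included, because they can still walk the soon-to-be-freed page
tables speculatively:

    /* Simplified sketch of the CPU targeting in native_flush_tlb_multi(). */
    if (info->freed_tables)
            /* Page tables are going away: flush lazy-TLB CPUs as well. */
            on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
    else
            /* Only leaf entries changed: lazy CPUs may skip the flush. */
            on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
                                  (void *)info, 1, cpumask);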

The patch "x86/mm: only invalidate final translations with INVLPGB", which
is currently under review (see
<https://lore.kernel.org/all/20241230175550.4046587-13-riel@surriel.com/>),
would probably make the impact of this a lot worse.

Fixes: 016c4d92cd16 ("x86/mm/tlb: Add freed_tables argument to flush_tlb_mm_range")
Signed-off-by: Jann Horn <jannh@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: stable@...r.kernel.org
Link: https://lkml.kernel.org/r/20250103-x86-collapse-flush-fix-v1-1-3c521856cfa6@google.com
---
 arch/x86/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa..3da6451 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
 	flush_tlb_mm_range((vma)->vm_mm, start, end,			\
 			   ((vma)->vm_flags & VM_HUGETLB)		\
 				? huge_page_shift(hstate_vma(vma))	\
-				: PAGE_SHIFT, false)
+				: PAGE_SHIFT, true)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,

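For context, the boolean being flipped in the hunk above is the freed_tables
argument that was added by the commit named in the Fixes: tag. The prototype
the macro ends up calling is roughly the following (declared just below the
shown context in the same header; wrapping may differ):

    extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
                                   unsigned long end, unsigned int stride_shift,
                                   bool freed_tables);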