Message-ID: <5418a661-dbd0-46e9-8ef7-b1c5a34acce3@redhat.com>
Date: Mon, 3 Mar 2025 09:49:41 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 "David S. Miller" <davem@...emloft.net>,
 Andreas Larsson <andreas@...sler.com>, Juergen Gross <jgross@...e.com>,
 Boris Ostrovsky <boris.ostrovsky@...cle.com>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 Catalin Marinas <catalin.marinas@....com>
Cc: linux-mm@...ck.org, sparclinux@...r.kernel.org,
 xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage

On 02.03.25 15:55, Ryan Roberts wrote:
> The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
> are a bit of a mess (to put it politely). There are a number of issues
> related to nesting of lazy mmu regions and confusion over whether the
> task, when in a lazy mmu region, is preemptible or not. Fix all the
> issues relating to the core-mm. Follow up commits will fix the
> arch-specific implementations. Three arches implement lazy mmu: powerpc,
> sparc and x86.
> 
> When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
> 6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
> expected that lazy mmu regions would never nest and that the appropriate
> page table lock(s) would be held while in the region, thus ensuring the
> region is non-preemptible. Additionally lazy mmu regions were only used
> during manipulation of user mappings.
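> 
> Purely as an illustration (example_set_user_ptes() is a hypothetical
> helper, not lifted from any real call site), that originally-expected
> usage looked roughly like this: one flat region, with the pte lock held
> so that, at least on !PREEMPT_RT, the region was implicitly
> non-preemptible:
> 
>     #include <linux/mm.h>
>     #include <linux/pgtable.h>
> 
>     static void example_set_user_ptes(struct mm_struct *mm, pmd_t *pmd,
>                                       unsigned long addr, pte_t *ptep,
>                                       pte_t pte, unsigned int nr)
>     {
>             spinlock_t *ptl = pte_lockptr(mm, pmd);
> 
>             spin_lock(ptl);                 /* disables preemption on !PREEMPT_RT */
>             arch_enter_lazy_mmu_mode();     /* never nested, lock held throughout */
>             for (; nr--; ptep++, addr += PAGE_SIZE) {
>                     set_pte_at(mm, addr, ptep, pte);
>                     pte = pte_next_pfn(pte);
>             }
>             arch_leave_lazy_mmu_mode();     /* any batched updates issued here */
>             spin_unlock(ptl);
>     }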
> 
> Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
> updates") started invoking the lazy mmu mode in apply_to_pte_range(),
> which is used for both user and kernel mappings. For kernel mappings the
> region is no longer protected by any lock so there is no longer any
> guarantee about non-preemptibility. Additionally, for RT configs,
> holding the PTL only implies no CPU migration; it doesn't prevent
> preemption.
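> 
> For reference, the relevant shape of apply_to_pte_range() in mm/memory.c
> is roughly the following (a heavily simplified sketch; allocation, the
> 'create' path and error handling are elided), which shows why the kernel
> case has no lock held across the lazy mmu region:
> 
>     static int apply_to_pte_range_sketch(struct mm_struct *mm, pmd_t *pmd,
>                                          unsigned long addr, unsigned long end,
>                                          pte_fn_t fn, void *data)
>     {
>             pte_t *mapped_pte, *pte;
>             spinlock_t *ptl;
>             int err = 0;
> 
>             mapped_pte = pte = (mm == &init_mm) ?
>                     pte_offset_kernel(pmd, addr) :            /* kernel: no lock */
>                     pte_offset_map_lock(mm, pmd, addr, &ptl); /* user: PTL held */
> 
>             arch_enter_lazy_mmu_mode();
>             do {
>                     err = fn(pte++, addr, data);
>             } while (!err && (addr += PAGE_SIZE) != end);
>             arch_leave_lazy_mmu_mode();
> 
>             if (mm != &init_mm)
>                     pte_unmap_unlock(mapped_pte, ptl);
>             return err;
>     }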
> 
> Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
> arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
> set_ptes(), used by x86. So after this commit, lazy mmu regions can be
> nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
> page table range API") and commit 9fee28baa601 ("powerpc: implement the
> new page table range API") did the same for the sparc and powerpc
> set_ptes() overrides.
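> 
> For reference, the generic set_ptes() in include/linux/pgtable.h has
> roughly this shape (checks elided), so any caller that is already inside
> a lazy mmu region and ends up here nests the hooks:
> 
>     static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>                     pte_t *ptep, pte_t pte, unsigned int nr)
>     {
>             arch_enter_lazy_mmu_mode();
>             for (;;) {
>                     set_pte(ptep, pte);
>                     if (--nr == 0)
>                             break;
>                     ptep++;
>                     pte = pte_next_pfn(pte);
>             }
>             arch_leave_lazy_mmu_mode();
>     }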
> 
> powerpc can't deal with preemption so avoids it in commit
> b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
> which explicitly disables preemption for the whole region in its
> implementation. x86 can support preemption (or at least it could until
> it tried to add support for nesting; more on this below). Sparc looks to be
> totally broken in the face of preemption, as far as I can tell.
> 
> powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
> ("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
> which removes the lazy mmu calls from its implementation of set_ptes().
> x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
> nesting of same lazy mode") but as far as I can tell, this breaks its
> support for preemption.
> 
> In short, it's all a mess; the semantics for
> arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
> result the implementations all have different expectations, sticking
> plasters and bugs.
> 
> arm64 is aiming to start using these hooks, so let's clean everything up
> before adding an arm64 implementation. Update the documentation to state
> that lazy mmu regions can never be nested, must not be called in
> interrupt context, and that preemption may or may not be enabled for the
> duration of the region.
> 
> Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
> called in pagemap_scan_pmd_entry() to follow the normal pattern of
> holding the ptl for user space mappings. As a result the scope is
> reduced to only the pte table, but that's where most of the performance
> win is. While I believe there wasn't technically a bug here, the
> original scope made it easier to accidentally nest or, worse,
> accidentally call something like kmap(), which would expect an
> immediate-mode pte modification but would end up with it deferred.
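> 
> The normal pattern mentioned above is, roughly, the following
> (an illustrative sketch of a hypothetical user-space walker,
> example_scan_pte_table(), not the actual fs/proc/task_mmu.c code): map
> and lock one pte table, and keep the lazy mmu region within that scope:
> 
>     #include <linux/mm.h>
>     #include <linux/pgtable.h>
> 
>     static int example_scan_pte_table(struct mm_struct *mm, pmd_t *pmd,
>                                       unsigned long addr, unsigned long end)
>     {
>             spinlock_t *ptl;
>             pte_t *start_pte, *pte;
> 
>             start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
>             if (!pte)
>                     return 0;
> 
>             arch_enter_lazy_mmu_mode();     /* scope: this pte table only */
>             for (; addr != end; pte++, addr += PAGE_SIZE) {
>                     /* ... inspect and/or update the pte here ... */
>             }
>             arch_leave_lazy_mmu_mode();
> 
>             pte_unmap_unlock(start_pte, ptl);
>             return 0;
>     }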
> 
> Arch-specific fixes to conform to the new spec will follow this one.
> 
> These issues were spotted by code review and I have no evidence of
> issues being reported in the wild.
> 

All looking good to me!

Acked-by: David Hildenbrand <david@...hat.com>

-- 
Cheers,

David / dhildenb


