Message-ID: <87ikfn3yvs.ritesh.list@gmail.com>
Date: Thu, 06 Nov 2025 22:02:39 +0530
From: Ritesh Harjani (IBM) <ritesh.list@...il.com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: Kevin Brodsky <kevin.brodsky@....com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Andreas Larsson <andreas@...sler.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Boris Ostrovsky <boris.ostrovsky@...cle.com>, Borislav Petkov <bp@...en8.de>, 
	Catalin Marinas <catalin.marinas@....com>, Christophe Leroy <christophe.leroy@...roup.eu>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, David Hildenbrand <david@...hat.com>, 
	"David S. Miller" <davem@...emloft.net>, David Woodhouse <dwmw2@...radead.org>, 
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>, Jann Horn <jannh@...gle.com>, 
	Juergen Gross <jgross@...e.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Madhavan Srinivasan <maddy@...ux.ibm.com>, 
	Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>, Mike Rapoport <rppt@...nel.org>, 
	Nicholas Piggin <npiggin@...il.com>, Peter Zijlstra <peterz@...radead.org>, 
	Ryan Roberts <ryan.roberts@....com>, Suren Baghdasaryan <surenb@...gle.com>, 
	Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>, Will Deacon <will@...nel.org>, 
	Yeoreum Yun <yeoreum.yun@....com>, linux-arm-kernel@...ts.infradead.org, 
	linuxppc-dev@...ts.ozlabs.org, sparclinux@...r.kernel.org, 
	xen-devel@...ts.xenproject.org, x86@...nel.org
Subject: Re: [PATCH v4 07/12] mm: enable lazy_mmu sections to nest

Alexander Gordeev <agordeev@...ux.ibm.com> writes:

> On Wed, Nov 05, 2025 at 02:19:03PM +0530, Ritesh Harjani wrote:
>> > + * in_lazy_mmu_mode() can be used to check whether the lazy MMU mode is
>> > + * currently enabled.
>> >   */
>> >  #ifdef CONFIG_ARCH_HAS_LAZY_MMU_MODE
>> >  static inline void lazy_mmu_mode_enable(void)
>> >  {
>> > -	arch_enter_lazy_mmu_mode();
>> > +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
>> > +
>> > +	VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
>> > +	/* enable() must not be called while paused */
>> > +	VM_WARN_ON(state->nesting_level > 0 && !state->active);
>> > +
>> > +	if (state->nesting_level++ == 0) {
>> > +		state->active = true;
>> > +		arch_enter_lazy_mmu_mode();
>> > +	}
>> >  }
>> 
>> Some architectures disable preemption in their
>> arch_enter_lazy_mmu_mode(). So shouldn't state->active = true
>> happen after arch_enter_lazy_mmu_mode() has disabled preemption? i.e.
>
> Do you have some scenario in mind that could cause an issue?
>
No, not really. But that's a deviation from what the previous arch hooks
were expecting. Although, thinking this through, I don't have any use
case where this can be a problem.

But let me revisit some of the code paths in the ppc64 lazy MMU handling...

Looking at the arch-specific use case, I see we always do get_cpu_var()
to access the per-cpu batch array, which disables preemption before the
per-cpu structure is touched. This per-cpu structure is where we batch
the PTE updates...

For example:

    arch_enter_lazy_mmu_mode()
        hpte_need_flush()
            get_cpu_var()   // this takes care of preempt_disable()
            adds vpns to per-cpu batch[i]
            put_cpu_var()   // re-enables preemption
    arch_leave_lazy_mmu_mode()
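
To make this concrete, here is a rough sketch of the pattern I mean
(hand-written, with a made-up batch struct and helper name, not the
actual arch/powerpc code); the only point is that the per-cpu batch is
never touched outside a get_cpu_var()/put_cpu_var() pair:

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    #define SKETCH_BATCH_MAX 192

    /* made-up stand-in for the real ppc64 TLB batch structure */
    struct sketch_tlb_batch {
            unsigned long vpn[SKETCH_BATCH_MAX];
            unsigned int index;
    };

    static DEFINE_PER_CPU(struct sketch_tlb_batch, sketch_tlb_batch);

    /* called for each PTE update inside the lazy MMU section */
    static void sketch_batch_add(unsigned long vpn)
    {
            struct sketch_tlb_batch *batch = &get_cpu_var(sketch_tlb_batch);

            /*
             * Preemption is disabled here, so the per-cpu batch is
             * stable (overflow/flush handling omitted in this sketch).
             */
            batch->vpn[batch->index++] = vpn;

            put_cpu_var(sketch_tlb_batch);  /* re-enables preemption */
    }

So even if the task migrates between lazy_mmu_mode_enable() setting
state->active and the arch hook running, each batch access is still
done with preemption disabled on whichever CPU it ends up on.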

> IOW, what could go wrong if the process is scheduled to another
> CPU before preempt_disable() is called?

So from the above, I don't think your sequence of updating
   state->active = true
before calling the arch_enter hook should be a problem.
Based on the above, this looks mostly OK to me.
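
Just to spell out the window I was originally worried about, annotating
the hunk quoted above (the comment in the middle is mine, not part of
your patch):

    static inline void lazy_mmu_mode_enable(void)
    {
            struct lazy_mmu_state *state = &current->lazy_mmu_state;

            VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
            /* enable() must not be called while paused */
            VM_WARN_ON(state->nesting_level > 0 && !state->active);

            if (state->nesting_level++ == 0) {
                    state->active = true;
                    /*
                     * Window: active is already true here, but the arch
                     * hook (and whatever preempt_disable() it may do) has
                     * not run yet. Per the ppc64 flow above, nothing
                     * touches the per-cpu batch in this window, so it
                     * looks harmless.
                     */
                    arch_enter_lazy_mmu_mode();
            }
    }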

-ritesh
