Message-ID: <50d1b63a-88d7-4484-82c0-3bde96e3207d-agordeev@linux.ibm.com>
Date: Wed, 5 Nov 2025 17:12:21 +0100
From: Alexander Gordeev <agordeev@...ux.ibm.com>
To: Ritesh Harjani <ritesh.list@...il.com>
Cc: Kevin Brodsky <kevin.brodsky@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Andreas Larsson <andreas@...sler.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Borislav Petkov <bp@...en8.de>,
        Catalin Marinas <catalin.marinas@....com>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>,
        "David S. Miller" <davem@...emloft.net>,
        David Woodhouse <dwmw2@...radead.org>,
        "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
        Jann Horn <jannh@...gle.com>, Juergen Gross <jgross@...e.com>,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
        Madhavan Srinivasan <maddy@...ux.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>,
        Mike Rapoport <rppt@...nel.org>, Nicholas Piggin <npiggin@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ryan Roberts <ryan.roberts@....com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
        Will Deacon <will@...nel.org>, Yeoreum Yun <yeoreum.yun@....com>,
        linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
        sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org,
        x86@...nel.org
Subject: Re: [PATCH v4 07/12] mm: enable lazy_mmu sections to nest

On Wed, Nov 05, 2025 at 02:19:03PM +0530, Ritesh Harjani wrote:
> > + * in_lazy_mmu_mode() can be used to check whether the lazy MMU mode is
> > + * currently enabled.
> >   */
> >  #ifdef CONFIG_ARCH_HAS_LAZY_MMU_MODE
> >  static inline void lazy_mmu_mode_enable(void)
> >  {
> > -	arch_enter_lazy_mmu_mode();
> > +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
> > +
> > +	VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
> > +	/* enable() must not be called while paused */
> > +	VM_WARN_ON(state->nesting_level > 0 && !state->active);
> > +
> > +	if (state->nesting_level++ == 0) {
> > +		state->active = true;
> > +		arch_enter_lazy_mmu_mode();
> > +	}
> >  }
> 
> Some architectures disable preemption in their
> arch_enter_lazy_mmu_mode(). So shouldn't state->active = true happen
> after arch_enter_lazy_mmu_mode() has disabled preemption? i.e.

Do you have a scenario in mind that could cause an issue?
IOW, what could go wrong if the process is migrated to another
CPU before preempt_disable() is called?
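
For context, the kind of implementation being referred to, e.g. the
powerpc hash-MMU one, looks roughly like this (a simplified sketch from
memory, not the verbatim arch code):

	static inline void arch_enter_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		/* The per-CPU TLB batch is only stable once preemption is off */
		preempt_disable();
		batch = this_cpu_ptr(&ppc64_tlb_batch);
		batch->active = 1;
	}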

>   static inline void lazy_mmu_mode_enable(void)
>   {
>  -	arch_enter_lazy_mmu_mode();
>  +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
>  +
>  +	VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
>  +	/* enable() must not be called while paused */
>  +	VM_WARN_ON(state->nesting_level > 0 && !state->active);
>  +
>  +	if (state->nesting_level++ == 0) {
>  +		arch_enter_lazy_mmu_mode();
>  +		state->active = true;
>  +	}
>   }
> 
> ... I think it makes more sense to enable the state after the arch_*
> call, right?

But then in_lazy_mmu_mode() would return false if called from within
arch_enter_lazy_mmu_mode(). Not a big problem, but still...
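
(For reference, in_lazy_mmu_mode() in this series presumably boils down
to reading the per-task flag; a minimal sketch based only on the hunk
above:

	static inline bool in_lazy_mmu_mode(void)
	{
		/* true between enable()/disable(), except while paused */
		return current->lazy_mmu_state.active;
	}

so with the reordering the flag is still false while the arch hook
runs.)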

> >  static inline void lazy_mmu_mode_disable(void)
> >  {
> > -	arch_leave_lazy_mmu_mode();
> > +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
> > +
> > +	VM_WARN_ON_ONCE(state->nesting_level == 0);
> > +	VM_WARN_ON(!state->active);
> > +
> > +	if (--state->nesting_level == 0) {
> > +		state->active = false;
> > +		arch_leave_lazy_mmu_mode();
> > +	} else {
> > +		/* Exiting a nested section */
> > +		arch_flush_lazy_mmu_mode();
> > +	}
> >  }
> 
> This looks ok though.
> 
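
To spell out the intended nesting semantics, a usage sketch based
purely on the hunks above (names as in the patch):

	lazy_mmu_mode_enable();		/* 0 -> 1: active = true, arch enter */
	...
	lazy_mmu_mode_enable();		/* 1 -> 2: counter bump only */
	...
	lazy_mmu_mode_disable();	/* 2 -> 1: arch flush only */
	...
	lazy_mmu_mode_disable();	/* 1 -> 0: active = false, arch leave */
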
> >  
> >  static inline void lazy_mmu_mode_pause(void)
> >  {
> > +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
> > +
> > +	VM_WARN_ON(state->nesting_level == 0 || !state->active);
> > +
> > +	state->active = false;
> >  	arch_leave_lazy_mmu_mode();
> >  }
> >  
> >  static inline void lazy_mmu_mode_resume(void)
> >  {
> > +	struct lazy_mmu_state *state = &current->lazy_mmu_state;
> > +
> > +	VM_WARN_ON(state->nesting_level == 0 || state->active);
> > +
> > +	state->active = true;
> >  	arch_enter_lazy_mmu_mode();
> >  }
> 
> Ditto.
> 
> -ritesh
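
For completeness, pause/resume toggle 'active' without touching the
nesting counter, so the expected pairing inside an enabled section
would be (a sketch, not from the patch):

	lazy_mmu_mode_enable();
	...
	lazy_mmu_mode_pause();		/* active = false, arch leave */
	/* updates that must hit the page tables immediately */
	lazy_mmu_mode_resume();		/* active = true, arch enter again */
	...
	lazy_mmu_mode_disable();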
