Message-ID: <b6f5b3cc-93a0-408a-b7e0-72462f3fd549@redhat.com>
Date: Mon, 3 Nov 2025 17:03:42 +0100
From: David Hildenbrand <david@...hat.com>
To: Kevin Brodsky <kevin.brodsky@....com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Alexander Gordeev <agordeev@...ux.ibm.com>,
Andreas Larsson <andreas@...sler.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>, Borislav Petkov
<bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"David S. Miller" <davem@...emloft.net>,
David Woodhouse <dwmw2@...radead.org>, "H. Peter Anvin" <hpa@...or.com>,
Ingo Molnar <mingo@...hat.com>, Jann Horn <jannh@...gle.com>,
Juergen Gross <jgross@...e.com>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>, Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>, Ryan Roberts <ryan.roberts@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Thomas Gleixner
<tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>, Yeoreum Yun <yeoreum.yun@....com>,
linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org, x86@...nel.org
Subject: Re: [PATCH v4 08/12] arm64: mm: replace TIF_LAZY_MMU with
in_lazy_mmu_mode()
On 29.10.25 11:09, Kevin Brodsky wrote:
> The generic lazy_mmu layer now tracks whether a task is in lazy MMU
> mode. As a result we no longer need a TIF flag for that purpose -
> let's use the new in_lazy_mmu_mode() helper instead.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@....com>
> ---
> arch/arm64/include/asm/pgtable.h | 16 +++-------------
> arch/arm64/include/asm/thread_info.h | 3 +--
> 2 files changed, 4 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 535435248923..61ca88f94551 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -62,30 +62,21 @@ static inline void emit_pte_barriers(void)
>
> static inline void queue_pte_barriers(void)
> {
> - unsigned long flags;
> -
> if (in_interrupt()) {
> emit_pte_barriers();
> return;
> }
>
> - flags = read_thread_flags();
> -
> - if (flags & BIT(TIF_LAZY_MMU)) {
> - /* Avoid the atomic op if already set. */
> - if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
> - set_thread_flag(TIF_LAZY_MMU_PENDING);
> - } else {
> + if (in_lazy_mmu_mode())
> + test_and_set_thread_flag(TIF_LAZY_MMU_PENDING);
You likely don't want a test_and_set here: test_and_set_thread_flag()
ends up doing a test_and_set_bit() -- an atomic RMW -- unconditionally.
The point of the original code was only to avoid the atomic write when
the flag is already set.

So keep the current:

	/* Avoid the atomic op if already set. */
	if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
		set_thread_flag(TIF_LAZY_MMU_PENDING);
--
Cheers
David