Message-ID: <aV/bLBPVoUcum9d2@e129823.arm.com>
Date: Thu, 8 Jan 2026 16:28:28 +0000
From: Yeoreum Yun <yeoreum.yun@....com>
To: Will Deacon <will@...nel.org>
Cc: Carl Worth <carl@...amperecomputing.com>,
	Catalin Marinas <catalin.marinas@....com>,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	Taehyun Noh <taehyun@...xas.edu>, andreyknvl@...il.com,
	pcc@...gle.com
Subject: Re: [PATCH 1/2] arm64: mte: Unify kernel MTE policy and manipulation
 of TCO

Hi,

> On Thu, Oct 30, 2025 at 08:49:31PM -0700, Carl Worth wrote:
> > From: Taehyun Noh <taehyun@...xas.edu>
> >
> > The kernel's primary knob for controlling MTE tag checking is the
> > PSTATE.TCO bit (tag check override). TCO is enabled (which,
> > confusingly, _disables_ tag checking) by the hardware at the time of an
> > exception. Then, at various times, when the kernel needs to enable
> > tag-checking it clears TCO (which in turn allows TCF0 or TCF to
> > control whether tag-checking occurs).
> >
> > Some of the TCO manipulation code has redundancy and confusing naming.
> >
> > Fix the redundancy by introducing a new function user_uses_tagcheck
> > which captures the existing repeated condition. The new function
> > includes significant new comments to help explain the logic.
> >
> > Fix the naming by renaming mte_disable_tco_entry() to
> > set_kernel_mte_policy(). This function does not necessarily disable
> > TCO, but does so only conditionally in the case of KASAN HW TAGS. The
> > new name accurately describes the purpose of the function.
> >
> > This commit should have no behavioral change.
> >
> > Signed-off-by: Taehyun Noh <taehyun@...xas.edu>
> > Co-developed-by: Carl Worth <carl@...amperecomputing.com>
> > Signed-off-by: Carl Worth <carl@...amperecomputing.com>
> > ---
> >  arch/arm64/include/asm/mte.h     | 40 +++++++++++++++++++++++++++++++++-------
> >  arch/arm64/kernel/entry-common.c |  4 ++--
> >  arch/arm64/kernel/mte.c          |  2 +-
> >  3 files changed, 36 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> > index 3b5069f4683d..70dabc884616 100644
> > --- a/arch/arm64/include/asm/mte.h
> > +++ b/arch/arm64/include/asm/mte.h
> > @@ -224,7 +224,35 @@ static inline bool folio_try_hugetlb_mte_tagging(struct folio *folio)
> >  }
> >  #endif
> >
> > -static inline void mte_disable_tco_entry(struct task_struct *task)
> > +static inline bool user_uses_tagcheck(void)
> > +{
> > +	/*
> > +	 * To decide whether userspace wants tag checking we only look
> > +	 * at TCF0 (SCTLR_EL1.TCF0 bit 0 is set for both synchronous
> > +	 * or asymmetric mode).
> > +	 *
> > +	 * There's an argument that could be made that the kernel
> > +	 * should also consider the state of TCO (tag check override)
> > +	 * since userspace does have the ability to set that as well,
> > +	 * and that could suggest a desire to disable tag checking in
> > +	 * spite of the state of TCF0. However, the Linux kernel has
> > +	 * never historically considered the userspace state of TCO,
> > +	 * (so changing this would be an ABI break), and the hardware
> > +	 * unconditionally sets TCO when an exception occurs
> > +	 * anyway.
> > +	 *
> > +	 * So, again, here we look only at TCF0 and do not consider
> > +	 * TCO.
> > +	 */
> > +	return (current->thread.sctlr_user & (1UL << SCTLR_EL1_TCF0_SHIFT));
> > +}
> > +
> > +/*
> > + * Set the kernel's desired policy for MTE tag checking.
> > + *
> > + * This function should be used right after the kernel entry.
> > + */
> > +static inline void set_kernel_mte_policy(struct task_struct *task)
> >  {
> >  	if (!system_supports_mte())
> >  		return;
> > @@ -232,15 +260,13 @@ static inline void mte_disable_tco_entry(struct task_struct *task)
> >  	/*
> >  	 * Re-enable tag checking (TCO set on exception entry). This is only
> >  	 * necessary if MTE is enabled in either the kernel or the userspace
> > -	 * task in synchronous or asymmetric mode (SCTLR_EL1.TCF0 bit 0 is set
> > -	 * for both). With MTE disabled in the kernel and disabled or
> > -	 * asynchronous in userspace, tag check faults (including in uaccesses)
> > -	 * are not reported, therefore there is no need to re-enable checking.
> > +	 * task. With MTE disabled in the kernel and disabled or asynchronous
> > +	 * in userspace, tag check faults (including in uaccesses) are not
> > +	 * reported, therefore there is no need to re-enable checking.
> >  	 * This is beneficial on microarchitectures where re-enabling TCO is
> >  	 * expensive.
>
> The comment implies that toggling TCO can be expensive, so it's not clear
> to me that moving it to the uaccess routines in the next patch is
> necessarily a good idea in general. I understand that you see improvements
> with memcached, but have you tried exercising workloads that are heavy on
> user accesses?

TBH, I don't understand why toggling TCO is considered expensive:
PSTATE.TCO is 0 by default, and SCTLR_ELx.TCF is set to a value other
than TCF_NONE only when kasan_hw_tags_enabled() is true.
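
For reference, as I read the patch, the renamed helper boils down to
roughly the following (paraphrased from mte.h; the exact condition is
my assumption, not a literal quote of the patch):

static inline void set_kernel_mte_policy(struct task_struct *task)
{
        if (!system_supports_mte())
                return;

        /*
         * Clear TCO (i.e. re-enable tag checking) only when a tag
         * check fault could actually be observed.
         */
        if (kasan_hw_tags_enabled() || user_uses_tagcheck())
                asm volatile(SET_PSTATE_TCO(0));
}

So with SCTLR_ELx.TCF == TCF_NONE and userspace not using tag checking,
no MSR should be issued here at all, which is why I'd expect the common
case to be cheap.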

If I understand the performance results correctly, the
mem_access_check_* operations occur even when SCTLR_ELx.TCF == TCF_NONE.
The observed performance impact also seems to be caused by the incorrect
user_uses_tagcheck() check in mte_thread_switch() that Will pointed out,
which ends up setting the TCO bit.

If that's true, I think we could instead make PSTATE.TCO set (1) by
default, clear it only where needed, and handle the bit properly in the
enter_from_xxx() and exit_to_user_mode() paths, along the lines of the
sketch below...
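
Something like the following (completely untested;
mte_policy_on_entry() and mte_policy_on_exit_to_user() are only
illustrative names for whatever would be called from the
enter_from_xxx() and exit_to_user_mode() paths):

static __always_inline void mte_policy_on_entry(void)
{
        if (!system_supports_mte())
                return;

        /*
         * The hardware already set PSTATE.TCO on exception entry, so
         * only clear it when the kernel itself wants tag checks
         * (KASAN_HW_TAGS).
         */
        if (kasan_hw_tags_enabled())
                asm volatile(SET_PSTATE_TCO(0));
}

static __always_inline void mte_policy_on_exit_to_user(void)
{
        if (!system_supports_mte())
                return;

        /*
         * Re-enable checking on the way back out only when the
         * returning task can actually observe tag check faults.
         */
        if (user_uses_tagcheck())
                asm volatile(SET_PSTATE_TCO(0));
}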

Am I missing something?

[...]

--
Sincerely,
Yeoreum Yun
