Date:   Tue, 26 Jul 2022 21:44:40 +0800
From:   Jisheng Zhang <jszhang@...nel.org>
To:     Will Deacon <will@...nel.org>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: save movk instructions in mov_q when the lower
 16|32 bits are all zero

On Tue, Jul 19, 2022 at 07:13:41PM +0100, Will Deacon wrote:
> On Sat, Jul 09, 2022 at 04:48:30PM +0800, Jisheng Zhang wrote:
> > Currently mov_q is used to move a constant into a 64-bit register.
> > When the lower 16 or 32 bits of the constant are all zero, mov_q
> > emits one or two useless movk instructions. If the mov_q macro is
> > used in a hot code path, we want to avoid these movk instructions
> > as much as possible. For example, when CONFIG_ARM64_MTE is 'Y' and
> > CONFIG_KASAN_HW_TAGS is 'N', the following code in the __cpu_setup()
> > routine is a potential optimization target:
> > 
> >         /* set the TCR_EL1 bits */
> >         mov_q   x10, TCR_MTE_FLAGS
> > 
> > Before the patch:
> > 	mov	x10, #0x10000000000000
> > 	movk	x10, #0x40, lsl #32
> > 	movk	x10, #0x0, lsl #16
> > 	movk	x10, #0x0
> > 
> > After the patch:
> > 	mov	x10, #0x10000000000000
> > 	movk	x10, #0x40, lsl #32
> > 
> > Signed-off-by: Jisheng Zhang <jszhang@...nel.org>
> > ---
> >  arch/arm64/include/asm/assembler.h | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 8c5a61aeaf8e..09f408424cae 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -568,9 +568,13 @@ alternative_endif
> >  	movz	\reg, :abs_g3:\val
> >  	movk	\reg, :abs_g2_nc:\val
> >  	.endif
> > +	.if ((((\val) >> 16) & 0xffff) != 0)
> >  	movk	\reg, :abs_g1_nc:\val
> >  	.endif
> > +	.endif
> > +	.if (((\val) & 0xffff) != 0)
> >  	movk	\reg, :abs_g0_nc:\val
> > +	.endif
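
For the TCR_MTE_FLAGS example quoted above, the constant being loaded
is 0x10000000000000 | (0x40 << 32) = 0x0010004000000000. Its four
16-bit halfwords are 0x0010 (bits 63:48), 0x0040 (bits 47:32), 0x0000
(bits 31:16) and 0x0000 (bits 15:0), so the last two movk instructions
only rewrite halfwords that the initial movz has already cleared; the
two new .if checks elide exactly those dead writes.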
> 
> Please provide some numbers showing that this is worthwhile.
> 

No, I have no performance numbers, but here is my opinion about this
patch: the two checks don't add maintenance effort, and the macro's
readability remains good. If the two checks can save two movk
instructions, they are worthwhile to add.
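
To show what this looks like end to end, here is roughly how the whole
macro would read with the two checks applied. The opening .if/.else
lines are reproduced from the current mov_q in assembler.h as I read
it, so take this as a sketch of the post-patch macro rather than the
exact file contents:

	.macro	mov_q, reg, val
	// constant fits in a sign-extended 32-bit move
	.if (((\val) >> 31) == 0 || ((\val) >> 31) == 0x1ffffffff)
	movz	\reg, :abs_g1_s:\val
	.else
	// constant fits in a sign-extended 48-bit move
	.if (((\val) >> 47) == 0 || ((\val) >> 47) == 0x1ffff)
	movz	\reg, :abs_g2_s:\val
	.else
	movz	\reg, :abs_g3:\val
	movk	\reg, :abs_g2_nc:\val
	.endif
	// new: skip the movk when halfword [31:16] of \val is zero
	.if ((((\val) >> 16) & 0xffff) != 0)
	movk	\reg, :abs_g1_nc:\val
	.endif
	.endif
	// new: skip the movk when halfword [15:0] of \val is zero
	.if (((\val) & 0xffff) != 0)
	movk	\reg, :abs_g0_nc:\val
	.endif
	.endm

Both .if checks are evaluated at assembly time, so the only runtime
effect is the shorter instruction sequence.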


Thanks
