Message-ID: <YuKokvBjDxATePpH@xhacker>
Date: Thu, 28 Jul 2022 23:17:38 +0800
From: Jisheng Zhang <jszhang@...nel.org>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] arm64: save movk instructions in mov_q when the lower
16|32 bits are all zero
On Thu, Jul 28, 2022 at 10:49:02PM +0800, Jisheng Zhang wrote:
> On Wed, Jul 27, 2022 at 08:15:11AM -0700, Ard Biesheuvel wrote:
> > On Sat, 9 Jul 2022 at 01:58, Jisheng Zhang <jszhang@...nel.org> wrote:
> > >
> > > Currently mov_q is used to move a constant into a 64-bit register.
> > > When the lower 16 or 32 bits of the constant are all zero, mov_q
> > > emits one or two useless movk instructions. If the mov_q macro is used
> > > in a hot code path, we want to save as many movk instructions as
> > > possible. For example, when CONFIG_ARM64_MTE is 'Y' and
> > > CONFIG_KASAN_HW_TAGS is 'N', the following code in the __cpu_setup()
> > > routine is a potential optimization target:
> > >
> > > /* set the TCR_EL1 bits */
> > > mov_q x10, TCR_MTE_FLAGS
> > >
> > > Before the patch:
> > > mov x10, #0x10000000000000
> > > movk x10, #0x40, lsl #32
> > > movk x10, #0x0, lsl #16
> > > movk x10, #0x0
> > >
> > > After the patch:
> > > mov x10, #0x10000000000000
> > > movk x10, #0x40, lsl #32
> > >
> > > Signed-off-by: Jisheng Zhang <jszhang@...nel.org>
> >
> > This is broken for constants that have 0xffff in the top 16 bits: in
> > that case we will emit a MOVN/MOVK/MOVK sequence, and omitting the
> > MOVKs will set the corresponding fields to 0xffff, not 0x0.
>
> Thanks so much for this hint. I think you are right about the case of
> 0xffff in the top 16 bits.
>
The patch breaks the use case below:

	mov_q	x0, 0xffffffff00000000
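
If I read the current mov_q correctly, this constant goes down the
:abs_g2_s: branch, so the assembler turns the leading movz into a movn
and we get something like this (illustrative, not a real disassembly):

	movn	x0, #0x0, lsl #32	// x0 = 0xffffffffffffffff
	movk	x0, #0x0, lsl #16	// x0 = 0xffffffff0000ffff
	movk	x0, #0x0		// x0 = 0xffffffff00000000

With my patch the two trailing movk instructions get skipped because
their 16-bit fields are zero, so x0 ends up as 0xffffffffffffffff
instead of 0xffffffff00000000, which is exactly the MOVN case you
pointed out.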
I think the root cause is that mov_q builds the constant starting from
the high bits; if we changed the macro to start from the LSB instead,
that could fix the breakage, but it needs a rewrite of mov_q.
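
Something like the untested sketch below is what I have in mind (the
mov_q_lsb name is just for illustration). It builds the constant from
the LSB up and skips the movk for any zero 16-bit field, but it never
uses movn, so mostly-ones constants would take more instructions than
they do today:

	/*
	 * Untested sketch: build the constant from the LSB upwards and
	 * skip the movk for any 16-bit field that is zero. Unlike the
	 * current mov_q it never emits movn, so constants that are
	 * mostly ones get a longer sequence.
	 */
	.macro	mov_q_lsb, reg, val
	movz	\reg, #((\val) & 0xffff)
	.if ((((\val) >> 16) & 0xffff) != 0)
	movk	\reg, #(((\val) >> 16) & 0xffff), lsl #16
	.endif
	.if ((((\val) >> 32) & 0xffff) != 0)
	movk	\reg, #(((\val) >> 32) & 0xffff), lsl #32
	.endif
	.if ((((\val) >> 48) & 0xffff) != 0)
	movk	\reg, #(((\val) >> 48) & 0xffff), lsl #48
	.endif
	.endm

A smarter version would also start the movz at the lowest non-zero
field, otherwise the TCR_MTE_FLAGS example above still wastes an
instruction on the all-zero low 16 bits.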