Message-ID: <20250703180213.28c0e92e@pumpkin>
Date: Thu, 3 Jul 2025 18:02:13 +0100
From: David Laight <david.laight.linux@...il.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andy Lutomirski <luto@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave Hansen
<dave.hansen@...ux.intel.com>, x86@...nel.org, "H. Peter Anvin"
<hpa@...or.com>, Peter Zijlstra <peterz@...radead.org>, Ard Biesheuvel
<ardb@...nel.org>, "Paul E. McKenney" <paulmck@...nel.org>, Josh Poimboeuf
<jpoimboe@...nel.org>, Xiongwei Song <xiongwei.song@...driver.com>, Xin Li
<xin3.li@...el.com>, "Mike Rapoport (IBM)" <rppt@...nel.org>, Brijesh Singh
<brijesh.singh@....com>, Michael Roth <michael.roth@....com>, Tony Luck
<tony.luck@...el.com>, Alexey Kardashevskiy <aik@....com>, Alexander
Shishkin <alexander.shishkin@...ux.intel.com>, Jonathan Corbet
<corbet@....net>, Sohil Mehta <sohil.mehta@...el.com>, Ingo Molnar
<mingo@...nel.org>, Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>, Daniel
Sneddon <daniel.sneddon@...ux.intel.com>, Kai Huang <kai.huang@...el.com>,
Sandipan Das <sandipan.das@....com>, Breno Leitao <leitao@...ian.org>, Rick
Edgecombe <rick.p.edgecombe@...el.com>, Alexei Starovoitov
<ast@...nel.org>, Hou Tao <houtao1@...wei.com>, Juergen Gross
<jgross@...e.com>, Vegard Nossum <vegard.nossum@...cle.com>, Kees Cook
<kees@...nel.org>, Eric Biggers <ebiggers@...gle.com>, Jason Gunthorpe
<jgg@...pe.ca>, "Masami Hiramatsu (Google)" <mhiramat@...nel.org>, Andrew
Morton <akpm@...ux-foundation.org>, Luis Chamberlain <mcgrof@...nel.org>,
Yuntao Wang <ytcoode@...il.com>, Rasmus Villemoes
<linux@...musvillemoes.dk>, Christophe Leroy <christophe.leroy@...roup.eu>,
Tejun Heo <tj@...nel.org>, Changbin Du <changbin.du@...wei.com>, Huang
Shijie <shijie@...amperecomputing.com>, Geert Uytterhoeven
<geert+renesas@...der.be>, Namhyung Kim <namhyung@...nel.org>, Arnaldo
Carvalho de Melo <acme@...hat.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-efi@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv8 02/17] x86/asm: Introduce inline memcpy and memset
On Thu, 3 Jul 2025 17:10:34 +0300
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> On Thu, Jul 03, 2025 at 01:15:52PM +0100, David Laight wrote:
> > On Thu, 3 Jul 2025 13:39:57 +0300
> > "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> >
> > > On Thu, Jul 03, 2025 at 09:44:17AM +0100, David Laight wrote:
> > > > On Tue, 1 Jul 2025 12:58:31 +0300
> > > > "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> > > >
> > > > > Extract memcpy and memset functions from copy_user_generic() and
> > > > > __clear_user().
> > > > >
> > > > > They can be used as inline memcpy and memset instead of the GCC builtins
> > > > > whenever necessary. LASS requires them to handle text_poke.
> > > >
> > > > Except they contain the fault handlers so aren't generic calls.
> > >
> > > That's true. I will add a comment to clarify it.
> >
> > They need renaming.
>
> __inline_memcpy/memset_safe()?
'safe' against what? :-)
They can't be used for user accesses without access_ok() and clac.
The get/put_user variants without access_ok() have _unsafe() suffix.
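For reference, the usual pattern with those is (sketch from memory, untested):

	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;
	unsafe_get_user(val, uptr, Efault);
	user_access_end();
	return 0;
Efault:
	user_access_end();
	return -EFAULT;

user_access_begin() does the access_ok() and the stac,
user_access_end() the clac.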
>
> > ...
> > > > > diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
> > > > > index a508e4a8c66a..47b613690f84 100644
> > > > > --- a/arch/x86/lib/clear_page_64.S
> > > > > +++ b/arch/x86/lib/clear_page_64.S
> > > > > @@ -55,17 +55,26 @@ SYM_FUNC_END(clear_page_erms)
> > > > > EXPORT_SYMBOL_GPL(clear_page_erms)
> > > > >
> > > > > /*
> > > > > - * Default clear user-space.
> > > > > + * Default memset.
> > > > > * Input:
> > > > > * rdi destination
> > > > > + * rsi scratch
> > > > > * rcx count
> > > > > - * rax is zero
> > > > > + * al is value
> > > > > *
> > > > > * Output:
> > > > > * rcx: uncleared bytes or 0 if successful.
> > > > > + * rdx: clobbered
> > > > > */
> > > > > SYM_FUNC_START(rep_stos_alternative)
> > > > > ANNOTATE_NOENDBR
> > > > > +
> > > > > + movzbq %al, %rsi
> > > > > + movabs $0x0101010101010101, %rax
> > > > > +
> > > > > + /* RDX:RAX = RAX * RSI */
> > > > > + mulq %rsi
> > > >
> > > > NAK - you can't do that here.
> > > > Neither %rsi nor %rdx can be trashed.
> > > > The function has a very explicit calling convention.
> > >
> > > What calling convention? We change the only caller to conform to this.
> >
> > The one that is implicit in:
> >
> > > > > + asm volatile("1:\n\t"
> > > > > + ALT_64("rep stosb",
> > > > > + "call rep_stos_alternative", ALT_NOT(X86_FEATURE_FSRM))
> > > > > + "2:\n\t"
> > > > > + _ASM_EXTABLE_UA(1b, 2b)
> > > > > + : "+c" (len), "+D" (addr), ASM_CALL_CONSTRAINT
> > > > > + : "a" ((uint8_t)v)
> >
> > The called function is only allowed to change the registers that
> > 'rep stosb' uses - except it can access (but not change)
> > all of %rax - not just %al.
> >
> > See: https://godbolt.org/z/3fnrT3x9r
> > In particular note that 'do_mset' must not change %rax.
> >
> > This is very specific and is done so that the compiler can use
> > all the registers.
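To spell it out, the callee's contract is roughly this (illustrative
only, reusing the 'do_mset' name from the godbolt example, not the
real code):

SYM_FUNC_START(do_mset)
	/* in: %rdi = dest, %rcx = count, %rax = value in all 8 bytes */
	/* may clobber: %rcx, %rdi and flags - what 'rep stosb' changes */
	/* must preserve: %rax, %rsi, %rdx and everything else */
	test	%rcx, %rcx
	jz	2f
1:	movb	%al, (%rdi)
	inc	%rdi
	dec	%rcx
	jnz	1b
2:	RET
SYM_FUNC_END(do_mset)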
>
> Okay, I see what you are saying.
>
> > > > It is also almost certainly a waste of time.
> > > > Pretty much all the calls will be for a constant 0x00.
> > > > Rename it all memzero() ...
> > >
> > > text_poke_memset() is not limited to zeroing.
> >
> > But you don't want the overhead of extending the constant
> > on all the calls - never mind reserving %rdx to do it.
> > Maybe define a function that requires the caller to have
> > done the 'dirty work' - so any code that wants memzero()
> > just passes zero.
> > Or do the multiply in the C code where it will get optimised
> > away for constant zero.
> > You do get the multiply for the 'rep stosb' case - but that
> > is always going to be true unless you complicate things further.
>
> The patch below seems to do the trick: compiler optimizes out the
> multiplication for v == 0.
>
> It would be nice to avoid it for X86_FEATURE_FSRM, but we cannot use
> cpu_feature_enabled() here as <asm/cpufeature.h> depends on
> <asm/string.h>.
>
> I cannot say I like the result.
>
> Any suggestions?
>
> diff --git a/arch/x86/include/asm/string.h b/arch/x86/include/asm/string.h
> index becb9ee3bc8a..c7644a6f426b 100644
> --- a/arch/x86/include/asm/string.h
> +++ b/arch/x86/include/asm/string.h
> @@ -35,16 +35,27 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
>
> static __always_inline void *__inline_memset(void *addr, int v, size_t len)
> {
> + unsigned long val = v;
> void *ret = addr;
>
> + if (IS_ENABLED(CONFIG_X86_64)) {
> + /*
> +		 * Fill all bytes with the value in byte 0.
> +		 *
> +		 * To be used in rep_stos_alternative().
> + */
> + val &= 0xff;
> + val *= 0x0101010101010101;
> + }
That won't compile for 32bit, and 32bit needs the same byte-replication done.
val *= (unsigned long)0x0101010101010101ull;
should work.
I don't think you need the 'val &= 0xff', just rely on the caller
passing a valid value - nothing will break badly if it doesn't.
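e.g. (untested):

	/* ~0ul/0xff is 0x01010101 or 0x0101010101010101 as appropriate */
	val = v * (~0ul / 0xff);

which also does the right thing for 32bit.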
David
> +
> asm volatile("1:\n\t"
> ALT_64("rep stosb",
> "call rep_stos_alternative", ALT_NOT(X86_FEATURE_FSRM))
> "2:\n\t"
> _ASM_EXTABLE_UA(1b, 2b)
> : "+c" (len), "+D" (addr), ASM_CALL_CONSTRAINT
> - : "a" (v)
> - : "memory", _ASM_SI, _ASM_DX);
> + : "a" (val)
> + : "memory");
>
> return ret + len;
> }
> diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
> index 47b613690f84..3ef7d796deb3 100644
> --- a/arch/x86/lib/clear_page_64.S
> +++ b/arch/x86/lib/clear_page_64.S
> @@ -58,23 +58,15 @@ EXPORT_SYMBOL_GPL(clear_page_erms)
> * Default memset.
> * Input:
> * rdi destination
> - * rsi scratch
> * rcx count
> * al is value
> *
> * Output:
> * rcx: uncleared bytes or 0 if successful.
> - * rdx: clobbered
> */
> SYM_FUNC_START(rep_stos_alternative)
> ANNOTATE_NOENDBR
>
> - movzbq %al, %rsi
> - movabs $0x0101010101010101, %rax
> -
> - /* RDX:RAX = RAX * RSI */
> - mulq %rsi
> -
> cmpq $64,%rcx
> jae .Lunrolled
>