Message-ID: <YnpPIZ/yotlPKwiA@FVFF77S0Q05N>
Date:   Tue, 10 May 2022 12:40:17 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Alexander Popov <alex.popov@...ux.com>
Cc:     linux-arm-kernel@...ts.infradead.org, akpm@...ux-foundation.org,
        catalin.marinas@....com, keescook@...omium.org,
        linux-kernel@...r.kernel.org, luto@...nel.org, will@...nel.org
Subject: Re: [PATCH v2 02/13] stackleak: move skip_erasing() check earlier

On Sun, May 08, 2022 at 08:44:56PM +0300, Alexander Popov wrote:
> On 27.04.2022 20:31, Mark Rutland wrote:
> > In stackleak_erase() we check skip_erasing() after accessing some fields
> > from current. As generating the address of current uses asm which
> > hazards with the static branch asm, this work is always performed, even
> > when the static branch is patched to jump to the return at the end of the
> > function.
> 
> Nice find!
> 
> > This patch avoids this redundant work by moving the skip_erasing() check
> > earlier.
> > 
> > To avoid complicating initialization within stackleak_erase(), the body
> > of the function is split out into a __stackleak_erase() helper, with the
> > check left in a wrapper function. The __stackleak_erase() helper is
> > marked __always_inline to ensure that this is inlined into
> > stackleak_erase() and not instrumented.

[...]

> > diff --git a/kernel/stackleak.c b/kernel/stackleak.c
> > index ddb5a7f48d69e..753eab797a04d 100644
> > --- a/kernel/stackleak.c
> > +++ b/kernel/stackleak.c
> > @@ -70,7 +70,7 @@ late_initcall(stackleak_sysctls_init);
> >   #define skip_erasing()	false
> >   #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */
> > -asmlinkage void noinstr stackleak_erase(void)
> > +static __always_inline void __stackleak_erase(void)
> 
> Are you sure that __stackleak_erase() doesn't need asmlinkage and noinstr as well?

I am certain it needs neither.

It's static and never called from asm, so it doesn't need `asmlinkage`.

It's marked `__always_inline`, so it will always be inlined into its caller
(and if the compiler cannot inline it, the build fails with an error).

That's important for good codegen (especially with the on/off stack variants
later in the series). When the helper is inlined, the compiler treats its body
as part of the caller for code generation, so the caller's `noinstr` takes
effect.
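
To make that concrete: with CONFIG_STACKLEAK_RUNTIME_DISABLE=y, skip_erasing()
is a static branch, and on arm64 generating current's address is an asm read
of SP_EL0, so the compiler treats both as opaque and cannot sink the current
accesses past the branch. Rough shape below; a simplified, arm64-flavoured
sketch rather than the exact kernel code:

/* Simplified sketch, not the exact kernel code. */
static DEFINE_STATIC_KEY_FALSE(stack_erasing_bypass);
#define skip_erasing()	static_branch_unlikely(&stack_erasing_bypass)

static __always_inline struct task_struct *get_current(void)
{
	unsigned long sp_el0;

	/* 'current' lives in SP_EL0 on arm64; opaque to the compiler */
	asm ("mrs %0, sp_el0" : "=r" (sp_el0));

	return (struct task_struct *)sp_el0;
}

static __always_inline void __stackleak_erase(void)
{
	/* ... walk and poison the unused portion of the stack ... */
}

asmlinkage void noinstr stackleak_erase(void)
{
	/* the patched-out case returns before current is ever generated */
	if (skip_erasing())
		return;

	/* inlined here, so emitted as part of this noinstr function */
	__stackleak_erase();
}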

Thanks,
Mark.

> 
> >   {
> >   	/* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
> >   	unsigned long kstack_ptr = current->lowest_stack;
> > @@ -78,9 +78,6 @@ asmlinkage void noinstr stackleak_erase(void)
> >   	unsigned int poison_count = 0;
> >   	const unsigned int depth = STACKLEAK_SEARCH_DEPTH / sizeof(unsigned long);
> > -	if (skip_erasing())
> > -		return;
> > -
> >   	/* Check that 'lowest_stack' value is sane */
> >   	if (unlikely(kstack_ptr - boundary >= THREAD_SIZE))
> >   		kstack_ptr = boundary;
> > @@ -125,6 +122,14 @@ asmlinkage void noinstr stackleak_erase(void)
> >   	current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
> >   }
> > +asmlinkage void noinstr stackleak_erase(void)
> > +{
> > +	if (skip_erasing())
> > +		return;
> > +
> > +	__stackleak_erase();
> > +}
> > +
> >   void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
> >   {
> >   	unsigned long sp = current_stack_pointer;
