Message-ID: <YnplFtdEr8dBOvZU@FVFF77S0Q05N>
Date: Tue, 10 May 2022 14:13:58 +0100
From: Mark Rutland <mark.rutland@....com>
To: Alexander Popov <alex.popov@...ux.com>
Cc: linux-arm-kernel@...ts.infradead.org, akpm@...ux-foundation.org,
catalin.marinas@....com, keescook@...omium.org,
linux-kernel@...r.kernel.org, luto@...nel.org, will@...nel.org
Subject: Re: [PATCH v2 07/13] stackleak: rework poison scanning
On Mon, May 09, 2022 at 04:51:35PM +0300, Alexander Popov wrote:
> Hello Mark!
>
> On 27.04.2022 20:31, Mark Rutland wrote:
> > Currently we over-estimate the region of stack which must be erased.
> >
> > To determine the region to be erased, we scan downwards for a contiguous
> > block of poison values (or the low bound of the stack). There are a few
> > minor problems with this today:
> >
> > * When we find a block of poison values, we include this block within
> > the region to erase.
> >
> > As this is included within the region to erase, this causes us to
> > redundantly overwrite 'STACKLEAK_SEARCH_DEPTH' (128) bytes with
> > poison.
>
> Right, this can be improved.
>
> > * As the loop condition checks 'poison_count <= depth', it will run an
> > additional iteration after finding the contiguous block of poison,
> > decrementing 'erase_low' once more than necessary.
>
> Actually, I think the current code is correct.
>
> I'm intentionally searching one poison value more than
> STACKLEAK_SEARCH_DEPTH to avoid the corner case. See the BUILD_BUG_ON
> assertion in stackleak_track_stack() that describes it:
>
> 	/*
> 	 * Having CONFIG_STACKLEAK_TRACK_MIN_SIZE larger than
> 	 * STACKLEAK_SEARCH_DEPTH makes the poison search in
> 	 * stackleak_erase() unreliable. Let's prevent that.
> 	 */
> 	BUILD_BUG_ON(CONFIG_STACKLEAK_TRACK_MIN_SIZE > STACKLEAK_SEARCH_DEPTH);
I had read that, but as written that doesn't imply that it's necessary to scan
one more element than STACKLEAK_SEARCH_DEPTH, nor why. I'm more than happy to
change the logic, but I think we need a very clear explanation as to why we
need to scan the specific number of bytes we scan, and we should account for
that *within* STACKLEAK_SEARCH_DEPTH for clarity.
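For concreteness: on a 64-bit machine STACKLEAK_SEARCH_DEPTH is 128 bytes,
i.e. depth = 16 unsigned longs, so the existing 'poison_count <= depth'
condition only terminates the search after seeing 17 consecutive poison
words, whereas a 'poison_count < depth' condition terminates after exactly
16.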
> > As this is included within the region to erase, this causes us to
> > redundantly overwrite an additional unsigned long with poison.
> >
> > * As we always decrement 'erase_low' after checking an element on the
> > stack, we always include the element below this within the region to
> > erase.
> >
> > As this is included within the region to erase, this causes us to
> > redundantly overwrite an additional unsigned long with poison.
> >
> > Note that this is not a functional problem. As the loop condition
> > checks 'erase_low > task_stack_low', we'll never clobber the
> > STACK_END_MAGIC. As we always decrement 'erase_low' after this, we'll
> > never fail to erase the element immediately above the STACK_END_MAGIC.
>
> Right, I don't see any bug in the current erasing code.
>
> When I wrote the current code, I carefully checked all the corner cases. For
> example, on the first stack erasing, the STACK_END_MAGIC was not
> overwritten, but the memory next to it was erased. Same for the beginning of
> the stack: I carefully checked that no unpoisoned bytes were left on the
> thread stack.
>
> > In total, this can cause us to erase `128 + 2 * sizeof(unsigned long)`
> > bytes more than necessary, which is unfortunate.
> >
> > This patch reworks the logic to find the address immediately above the
> > poisoned region, by finding the lowest non-poisoned address. This is
> > factored into a stackleak_find_top_of_poison() helper both for clarity
> > and so that this can be shared with the LKDTM test in subsequent
> > patches.
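For concreteness, the following throwaway userspace sketch (not kernel code;
the array, constants, and helper names are all invented for the illustration)
re-creates the two scan shapes on a fake 64-word stack. Assuming 8-byte
longs, it shows the existing scan starting the erase 18 words
(128 + 2 * 8 = 144 bytes) lower than the reworked one:

/*
 * Toy userspace model of the two poison scans; purely illustrative.
 * Index 0 is the lowest "address", STACK_WORDS - 1 the highest.
 */
#include <stdio.h>

#define POISON		0xCCCCCCCCCCCCCCCCUL	/* stand-in for STACKLEAK_POISON */
#define DEPTH_WORDS	16U			/* 128 bytes / 8-byte words */
#define STACK_WORDS	64
#define USED_FROM	40			/* words 40..63 hold live data */

static unsigned long toy_stack[STACK_WORDS];

/* Existing logic: keep going until poison_count exceeds depth, decrementing
 * after every check; here the stop index lands 18 words (depth + 2) below
 * the lowest used word. */
static int old_scan(int start)
{
	unsigned int poison_count = 0;
	int idx = start;

	while (idx > 0 && poison_count <= DEPTH_WORDS) {
		if (toy_stack[idx] == POISON)
			poison_count++;
		else
			poison_count = 0;
		idx--;
	}
	return idx;
}

/* Reworked logic: remember the lowest non-poison word seen and stop after
 * depth consecutive poison words; the result is the word immediately above
 * the untouched poison. */
static int new_scan(int start)
{
	unsigned int poison_count = 0;
	int top_of_poison = start;
	int idx = start;

	while (idx > 0 && poison_count < DEPTH_WORDS) {
		idx--;
		if (toy_stack[idx] == POISON) {
			poison_count++;
		} else {
			poison_count = 0;
			top_of_poison = idx;
		}
	}
	return top_of_poison;
}

int main(void)
{
	int i, start = USED_FROM + 4;	/* tracked lowest_stack, slightly above the deepest use */
	int old_low, new_low;

	for (i = 0; i < STACK_WORDS; i++)
		toy_stack[i] = (i >= USED_FROM) ? 0xdeadbeefUL + i : POISON;

	old_low = old_scan(start);
	new_low = new_scan(start);

	printf("lowest used word:          %d\n", USED_FROM);	/* 40 */
	printf("old erase starts at word:  %d\n", old_low);	/* 22 */
	printf("new erase starts at word:  %d\n", new_low);	/* 40 */
	printf("extra bytes erased (old):  %zu\n",
	       (new_low - old_low) * sizeof(unsigned long));	/* 144 = 128 + 2*8 */
	return 0;
}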
>
> You know, I wrote stackleak_erase() in very plain C. I wanted a compiler to
> generate assembly that is very close to the original asm version. I worried
> that compilers might do weird stuff with the local variables and the stack
> pointer.
>
> So I checked stackleak for gcc versions 4.8, 5, 6, 7, 8, 9, and 10 on
> x86_64, i386 and arm64. This is my project that helped with this work:
> https://github.com/a13xp0p0v/kernel-build-containers
I've used the kernel.org cross toolchains, as published at:
https://mirrors.edge.kernel.org/pub/tools/crosstool/
> Mark, in this patch series you use many local variables and helper functions.
> Honestly, this worries me. For example, compilers can (and usually do)
> ignore the presence of the 'inline' specifier for the purpose of
> optimization.
I've deliberately used `__always_inline` rather than regular `inline` to
prevent the code being placed out-of-line. As mentioned in other replies, it
has stronger semantics.
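As a minimal illustration (userspace sketch, not kernel code; the kernel's
own __always_inline boils down to the same GCC attribute):

#define __always_inline	inline __attribute__((__always_inline__))

/* Plain 'inline' is only a hint: the compiler is free to emit this
 * out-of-line and call it, giving it a stack frame of its own. */
static inline unsigned long erase_helper_hint(unsigned long x)
{
	return x ^ 0xCCCCCCCCCCCCCCCCUL;
}

/* always_inline forces the body into each caller (or the build fails), so
 * no additional frame can sit between the caller and this code, which is
 * the property the stack-erasing code wants. */
static __always_inline unsigned long erase_helper_forced(unsigned long x)
{
	return x ^ 0xCCCCCCCCCCCCCCCCUL;
}

unsigned long caller(unsigned long x)
{
	return erase_helper_hint(x) + erase_helper_forced(x);
}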
Thanks,
Mark.
>
> Thanks!
>
> > Signed-off-by: Mark Rutland <mark.rutland@....com>
> > Cc: Alexander Popov <alex.popov@...ux.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: Andy Lutomirski <luto@...nel.org>
> > Cc: Kees Cook <keescook@...omium.org>
> > ---
> >  include/linux/stackleak.h | 26 ++++++++++++++++++++++++++
> >  kernel/stackleak.c        | 18 ++++--------------
> >  2 files changed, 30 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/linux/stackleak.h b/include/linux/stackleak.h
> > index 467661aeb4136..c36e7a3b45e7e 100644
> > --- a/include/linux/stackleak.h
> > +++ b/include/linux/stackleak.h
> > @@ -42,6 +42,32 @@ stackleak_task_high_bound(const struct task_struct *tsk)
> >  	return (unsigned long)task_pt_regs(tsk);
> >  }
> >  
> > +/*
> > + * Find the address immediately above the poisoned region of the stack, where
> > + * that region falls between 'low' (inclusive) and 'high' (exclusive).
> > + */
> > +static __always_inline unsigned long
> > +stackleak_find_top_of_poison(const unsigned long low, const unsigned long high)
> > +{
> > +	const unsigned int depth = STACKLEAK_SEARCH_DEPTH / sizeof(unsigned long);
> > +	unsigned int poison_count = 0;
> > +	unsigned long poison_high = high;
> > +	unsigned long sp = high;
> > +
> > +	while (sp > low && poison_count < depth) {
> > +		sp -= sizeof(unsigned long);
> > +
> > +		if (*(unsigned long *)sp == STACKLEAK_POISON) {
> > +			poison_count++;
> > +		} else {
> > +			poison_count = 0;
> > +			poison_high = sp;
> > +		}
> > +	}
> > +
> > +	return poison_high;
> > +}
> > +
> >  static inline void stackleak_task_init(struct task_struct *t)
> >  {
> >  	t->lowest_stack = stackleak_task_low_bound(t);
> > diff --git a/kernel/stackleak.c b/kernel/stackleak.c
> > index ba346d46218f5..afd54b8e10b83 100644
> > --- a/kernel/stackleak.c
> > +++ b/kernel/stackleak.c
> > @@ -74,20 +74,10 @@ static __always_inline void __stackleak_erase(void)
> >  {
> >  	const unsigned long task_stack_low = stackleak_task_low_bound(current);
> >  	const unsigned long task_stack_high = stackleak_task_high_bound(current);
> > -	unsigned long erase_low = current->lowest_stack;
> > -	unsigned long erase_high;
> > -	unsigned int poison_count = 0;
> > -	const unsigned int depth = STACKLEAK_SEARCH_DEPTH / sizeof(unsigned long);
> > -
> > -	/* Search for the poison value in the kernel stack */
> > -	while (erase_low > task_stack_low && poison_count <= depth) {
> > -		if (*(unsigned long *)erase_low == STACKLEAK_POISON)
> > -			poison_count++;
> > -		else
> > -			poison_count = 0;
> > -
> > -		erase_low -= sizeof(unsigned long);
> > -	}
> > +	unsigned long erase_low, erase_high;
> > +
> > +	erase_low = stackleak_find_top_of_poison(task_stack_low,
> > +						 current->lowest_stack);
> >  
> >  #ifdef CONFIG_STACKLEAK_METRICS
> >  	current->prev_lowest_stack = erase_low;
>