Message-ID: <20191104124017.GD45140@lakrids.cambridge.arm.com>
Date: Mon, 4 Nov 2019 12:40:18 +0000
From: Mark Rutland <mark.rutland@....com>
To: Sami Tolvanen <samitolvanen@...gle.com>
Cc: Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Dave Martin <Dave.Martin@....com>,
Kees Cook <keescook@...omium.org>,
Laura Abbott <labbott@...hat.com>,
Marc Zyngier <maz@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Jann Horn <jannh@...gle.com>,
Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
Masahiro Yamada <yamada.masahiro@...ionext.com>,
clang-built-linux@...glegroups.com,
kernel-hardening@...ts.openwall.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 07/17] scs: add support for stack usage debugging
On Fri, Nov 01, 2019 at 03:11:40PM -0700, Sami Tolvanen wrote:
> Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled,
> also prints out the highest shadow stack usage per process.
>
> Signed-off-by: Sami Tolvanen <samitolvanen@...gle.com>
> ---
> kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/kernel/scs.c b/kernel/scs.c
> index 7780fc4e29ac..67c43af627d1 100644
> --- a/kernel/scs.c
> +++ b/kernel/scs.c
> @@ -167,6 +167,44 @@ int scs_prepare(struct task_struct *tsk, int node)
> return 0;
> }
>
> +#ifdef CONFIG_DEBUG_STACK_USAGE
> +static inline unsigned long scs_used(struct task_struct *tsk)
> +{
> + unsigned long *p = __scs_base(tsk);
> + unsigned long *end = scs_magic(tsk);
> + uintptr_t s = (uintptr_t)p;
As previously mentioned, please use unsigned long rather than uintptr_t for
consistency.
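E.g. something like the below (an untested sketch, using the same helpers
as the patch above):

	static inline unsigned long scs_used(struct task_struct *tsk)
	{
		unsigned long *p = __scs_base(tsk);
		unsigned long *end = scs_magic(tsk);
		unsigned long s = (unsigned long)p;

		/* Find the first unused (zero) slot above the base. */
		while (p < end && *p)
			p++;

		return (unsigned long)p - s;
	}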
> +
> + while (p < end && *p)
> + p++;
I think this is the only place where we legitimately access the shadow
call stack directly. When using SCS and KASAN, are the
compiler-generated accesses to the SCS instrumented?
If not, it might make sense to make this:
while (p < end && READ_ONCE_NOCHECK(*p))
... and poison the allocation from KASAN's PoV, so that we can find
unintentional accesses more easily.
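If the compiler-generated accesses turn out not to be instrumented, the
poisoning could look something like the below (an untested sketch against
the slab-backed allocator earlier in this file; it assumes a __scs_magic()
helper that takes the raw allocation rather than a task_struct, and needs
<linux/kasan.h>):

	static void *scs_alloc(int node)
	{
		void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);

		if (s) {
			/* Write the magic word before poisoning. */
			*__scs_magic(s) = SCS_END_MAGIC;
			/*
			 * Poison the allocation so that KASAN reports any
			 * C-level access to the shadow stack; scs_used()
			 * then has to deliberately bypass the
			 * instrumentation with READ_ONCE_NOCHECK().
			 */
			kasan_poison_object_data(scs_cache, s);
		}

		return s;
	}

	static void scs_free(void *s)
	{
		/* Unpoison before handing the object back to the cache. */
		kasan_unpoison_object_data(scs_cache, s);
		kmem_cache_free(scs_cache, s);
	}

With that in place, scs_corrupted() below would presumably want the same
READ_ONCE_NOCHECK() treatment when it reads the magic word.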
Mark.
> +
> + return (uintptr_t)p - s;
> +}
> +
> +static void scs_check_usage(struct task_struct *tsk)
> +{
> + static DEFINE_SPINLOCK(lock);
> + static unsigned long highest;
> + unsigned long used = scs_used(tsk);
> +
> + if (used <= highest)
> + return;
> +
> + spin_lock(&lock);
> +
> + if (used > highest) {
> + pr_info("%s: highest shadow stack usage %lu bytes\n",
> + __func__, used);
> + highest = used;
> + }
> +
> + spin_unlock(&lock);
> +}
> +#else
> +static inline void scs_check_usage(struct task_struct *tsk)
> +{
> +}
> +#endif
> +
> bool scs_corrupted(struct task_struct *tsk)
> {
> return *scs_magic(tsk) != SCS_END_MAGIC;
> @@ -181,6 +219,7 @@ void scs_release(struct task_struct *tsk)
> return;
>
> WARN_ON(scs_corrupted(tsk));
> + scs_check_usage(tsk);
>
> scs_account(tsk, -1);
> task_set_scs(tsk, NULL);
> --
> 2.24.0.rc1.363.gb1bccd3e3d-goog
>