Date:   Tue, 5 Apr 2022 17:10:02 +0200
From:   Andrey Konovalov <andreyknvl@...il.com>
To:     Mark Rutland <mark.rutland@....com>
Cc:     andrey.konovalov@...ux.dev, Marco Elver <elver@...gle.com>,
        Alexander Potapenko <glider@...gle.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Andrey Ryabinin <ryabinin.a.a@...il.com>,
        kasan-dev <kasan-dev@...glegroups.com>,
        Vincenzo Frascino <vincenzo.frascino@....com>,
        Sami Tolvanen <samitolvanen@...gle.com>,
        Peter Collingbourne <pcc@...gle.com>,
        Evgenii Stepanov <eugenis@...gle.com>,
        Florian Mayer <fmayer@...gle.com>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrey Konovalov <andreyknvl@...gle.com>
Subject: Re: [PATCH v2 0/4] kasan, arm64, scs, stacktrace: collect stack
 traces from Shadow Call Stack

On Thu, Mar 31, 2022 at 2:39 PM Mark Rutland <mark.rutland@....com> wrote:
>
> I've had a quick look into this, to see what we could do to improve the regular
> unwinder, but I can't reproduce that 30% number.
>
> In local testing the worst case I could get to was 6-13% (with both the
> stacktrace *and* stackdepot logic hacked out entirely).
>
> I'm testing with clang 13.0.0 from the llvm.org binary releases, with defconfig
> + SHADOW_CALL_STACK + KASAN_<option>, using a very recent snapshot of mainline
> (commit d888c83fcec75194a8a48ccd283953bdba7b2550). I'm booting a
> KVM-accelerated QEMU VM on ThunderX2 with "init=/sbin/reboot -- -f" in the
> kernel bootargs, timing the whole run from the outside with "perf stat --null".
>
> The 6% figure is if I count boot as a whole including VM startup and teardown
> (i.e. an under-estimate of the proportion), the 13% figure is if I subtract a
> baseline timing from a run without KASAN (i.e. an over-estimate of the
> proportion).
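Mark's two estimates can be reproduced from three whole-run timings. The sketch below is illustrative only: the QEMU invocation is abbreviated and all three timings are made-up placeholders, not numbers from this thread.

```shell
# Hedged sketch of the measurement Mark describes: time the whole VM run
# from the outside, e.g. (flags abbreviated, not a complete command):
#
#   perf stat --null -- qemu-system-aarch64 ... \
#       -kernel Image -append "init=/sbin/reboot -- -f"
#
# Three such timings yield both estimates (all numbers made up):
t_with=33.3     # full run, stacktrace + stackdepot logic present
t_without=31.3  # full run, that logic hacked out entirely
t_base=17.9     # full run, KASAN disabled altogether

# Under-estimate: cost as a share of the whole run,
# which still includes VM startup/teardown time.
awk -v w="$t_with" -v o="$t_without" \
    'BEGIN { printf "under-estimate: %.1f%%\n", (w - o) / w * 100 }'

# Over-estimate: cost relative to KASAN-attributable time only
# (baseline run without KASAN subtracted first).
awk -v w="$t_with" -v o="$t_without" -v b="$t_base" \
    'BEGIN { printf "over-estimate: %.1f%%\n", (w - o) / (w - b) * 100 }'
```

With the placeholder timings above, the two formulas land near the 6% and 13% figures quoted in the mail, which shows how far apart the two accounting choices can drift.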

I think this is the reason for the limited improvement that you
observe: measuring the whole run, including VM startup and teardown,
also counts time spent by userspace apps, which is irrelevant here.

I measure boot time until a certain point during kernel boot. E.g.,
with the attached config, I measure the time until test_meminit starts
running.
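With CONFIG_PRINTK_TIME=y, "time until test_meminit starts" can be read straight off the dmesg timestamp of its first log line. A minimal sketch, with a fabricated log line standing in for real dmesg output:

```shell
# Fabricated dmesg-style line for illustration; in practice this would
# come from "dmesg | grep test_meminit" or the serial console log.
line='[    6.012345] test_meminit: running tests'

# Strip the "[ seconds.micros ]" prefix that CONFIG_PRINTK_TIME adds.
boot_time=$(printf '%s\n' "$line" | sed -n 's/^\[ *\([0-9.]*\)\].*/\1/p')
echo "reached test_meminit at ${boot_time}s"
```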

It takes 6 seconds for the kernel to reach test_meminit as-is, and 4
seconds with kasan_save_stack() commented out. Commenting out only
__stack_depot_save() gives 5.9 seconds, so stack_trace_save() is the
slow part.
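The three timings in the mail split the ~2s cost between the two callees. A back-of-envelope breakdown (the input numbers are the ones quoted above; the attribution assumes the two costs are simply additive):

```shell
# Boot-to-test_meminit timings from the mail, in seconds.
t_asis=6.0    # unmodified kernel
t_nosave=4.0  # kasan_save_stack() commented out (both callees gone)
t_nodepot=5.9 # only __stack_depot_save() commented out

awk -v a="$t_asis" -v s="$t_nosave" -v d="$t_nodepot" 'BEGIN {
    printf "total stack collection: %.1fs\n", a - s  # both callees
    printf "__stack_depot_save:     %.1fs\n", a - d  # depot alone
    printf "stack_trace_save:       %.1fs\n", d - s  # remainder
}'
```

The remainder attributed to stack_trace_save() dominates, which is the point being made: the unwinder, not the depot, is where the time goes.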

> Could you let me know how you're measuring this, and which platform+config
> you're using?

I've attached the config that I use. It's essentially defconfig + SCS
+ KASAN + maybe a few other options.

> I'll have a play with some configs in case there's a pathological
> configuration, but if you could let me know how/what you're testing that'd be a
> great help.

Thanks!

Download attachment ".config" of type "application/octet-stream" (206210 bytes)
