Message-ID: <20170517072552.GB18406@js1304-desktop>
Date: Wed, 17 May 2017 16:25:54 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
kasan-dev <kasan-dev@...glegroups.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>, kernel-team@....com
Subject: Re: [PATCH v1 00/11] mm/kasan: support per-page shadow memory to
reduce memory consumption
On Wed, May 17, 2017 at 04:23:17PM +0900, Joonsoo Kim wrote:
> > However, I see some very significant slowdowns with inline
> > instrumentation. I did 3 tests:
> > 1. Boot speed, I measured time for a particular message to appear on
> > console. Before:
> > [ 2.504652] random: crng init done
> > [ 2.435861] random: crng init done
> > [ 2.537135] random: crng init done
> > After:
> > [ 7.263402] random: crng init done
> > [ 7.263402] random: crng init done
> > [ 7.174395] random: crng init done
> >
> > That's a ~3x slowdown.
> >
> > 2. I've run bench_readv benchmark:
> > https://raw.githubusercontent.com/google/sanitizers/master/address-sanitizer/kernel_buildbot/slave/bench_readv.c
> > as:
> > while true; do time ./bench_readv bench_readv 300000 1; done
> >
> > Before:
> > sys 0m7.299s
> > sys 0m7.218s
> > sys 0m6.973s
> > sys 0m6.892s
> > sys 0m7.035s
> > sys 0m6.982s
> > sys 0m6.921s
> > sys 0m6.940s
> > sys 0m6.905s
> > sys 0m7.006s
> >
> > After:
> > sys 0m8.141s
> > sys 0m8.077s
> > sys 0m8.067s
> > sys 0m8.116s
> > sys 0m8.128s
> > sys 0m8.115s
> > sys 0m8.108s
> > sys 0m8.326s
> > sys 0m8.529s
> > sys 0m8.164s
> > sys 0m8.380s
> >
> > This is a ~19% slowdown.
> >
> > 3. I've run bench_pipes benchmark:
> > https://raw.githubusercontent.com/google/sanitizers/master/address-sanitizer/kernel_buildbot/slave/bench_pipes.c
> > as:
> > while true; do time ./bench_pipes 10 10000 1; done
> >
> > Before:
> > sys 0m5.393s
> > sys 0m6.178s
> > sys 0m5.909s
> > sys 0m6.024s
> > sys 0m5.874s
> > sys 0m5.737s
> > sys 0m5.826s
> > sys 0m5.664s
> > sys 0m5.758s
> > sys 0m5.421s
> > sys 0m5.444s
> > sys 0m5.479s
> > sys 0m5.461s
> > sys 0m5.417s
> >
> > After:
> > sys 0m8.718s
> > sys 0m8.281s
> > sys 0m8.268s
> > sys 0m8.334s
> > sys 0m8.246s
> > sys 0m8.267s
> > sys 0m8.265s
> > sys 0m8.437s
> > sys 0m8.228s
> > sys 0m8.312s
> > sys 0m8.556s
> > sys 0m8.680s
> >
> > This is a ~52% slowdown.
> >
> >
> > This does not look acceptable to me. I would be ready to pay, say,
> > 10% of performance for this. But it seems that this can cause up to
> > a 2-4x slowdown for some workloads.
>
> I found the reasons for the above regression. There are two of them.
>
> 1. In my implementation, the original shadow for memory allocated from
> memblock is black shadow, so accesses to it end up calling
> kasan_report(). The report check then passes because the per-page
> shadow is zero shadow, but the extra call still adds some overhead.
>
> 2. Memory used by stackdepot is in a similar situation to #1. It
> allocates a page, divides it into many objects and then uses those
> objects. Although "KASAN_SANITIZE_stackdepot.o := n" disables the
> sanitizer for that file, find_stack() calls memcmp(), which lives in
> another file, so the sanitizer still runs for that call.
>
> Problem #1 can be fixed, but it needs more investigation. I will
> respin the series after fixing it.
>
> Problem #2 can also be fixed. There are two options here. First, use a
> private memcmp() in stackdepot and disable the sanitizer for it (a
> rough sketch of this option is included below, just before the
> attached patch). I think this is the right approach, since the current
> call slows down every KASAN build and we don't want to sanitize KASAN
> itself. Second, I can provide a function to map the actual shadow
> manually, which reduces how often kasan_report() is called.
>
> See the attached patch. It implements the latter approach for problem
> #2 and should reduce the performance regression. I have tested your
> bench_pipes benchmark with it and found that performance is restored.
> However, problem #1 still remains, so I'm not sure it completely
> removes your regression. Could you check, if possible?
>
Oops... I forgot to attach the patch.
Thanks.
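
For reference, here is a rough, untested sketch of the first option
mentioned above: a private, uninstrumented comparison helper local to
lib/stackdepot.c so that find_stack() no longer calls the instrumented
memcmp() in another file. The helper name stackdepot_trace_equal() is
only illustrative and is not part of the attached patch:

/*
 * Local comparison helper in lib/stackdepot.c. Because
 * KASAN_SANITIZE_stackdepot.o := n, this copy is compiled without
 * instrumentation, unlike the out-of-file memcmp() it would replace.
 */
static bool stackdepot_trace_equal(const unsigned long *e1,
				   const unsigned long *e2,
				   unsigned int nr_entries)
{
	unsigned int i;

	for (i = 0; i < nr_entries; i++) {
		if (e1[i] != e2[i])
			return false;
	}
	return true;
}

find_stack() would then use something like
stackdepot_trace_equal(found->entries, entries, size) in place of its
memcmp() call on the same entry arrays.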
--------------------->8-------------------
From 7798620be07c2c0c7197dfbc1ebeb0b603ab35c7 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@....com>
Date: Wed, 17 May 2017 15:34:43 +0900
Subject: [PATCH] lib/stackdepot: use original shadow
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
lib/stackdepot.c | 7 ++++++-
mm/kasan/kasan.c | 12 ++++++++++++
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index f87d138..cc98ce2 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -80,6 +80,8 @@ static int next_slab_inited;
static size_t depot_offset;
static DEFINE_SPINLOCK(depot_lock);
+extern void kasan_map_shadow_private(const void *addr, size_t size, gfp_t flags);
+
static bool init_stack_slab(void **prealloc)
{
if (!*prealloc)
@@ -245,8 +247,11 @@ depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
alloc_flags |= __GFP_NOWARN;
page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
- if (page)
+ if (page) {
prealloc = page_address(page);
+ kasan_map_shadow_private(prealloc,
+ PAGE_SIZE << STACK_ALLOC_ORDER, alloc_flags);
+ }
}
spin_lock_irqsave(&depot_lock, flags);
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index fd6b7d4..3c18d18 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,18 @@ static int kasan_map_shadow(const void *addr, size_t size, gfp_t flags)
return err;
}
+void kasan_map_shadow_private(const void *addr, size_t size, gfp_t flags)
+{
+ int err;
+
+ err = kasan_map_shadow(addr, size, flags);
+ if (err)
+ return;
+
+ kasan_unpoison_shadow(addr, size);
+ kasan_poison_pshadow(addr, size);
+}
+
static int kasan_unmap_shadow_pte(pte_t *ptep, pgtable_t token,
unsigned long addr, void *data)
{
--
2.7.4