Message-ID: <20190815112844.GC22153@lakrids.cambridge.arm.com>
Date: Thu, 15 Aug 2019 12:28:44 +0100
From: Mark Rutland <mark.rutland@....com>
To: Daniel Axtens <dja@...ens.net>
Cc: kasan-dev@...glegroups.com, linux-mm@...ck.org, x86@...nel.org,
aryabinin@...tuozzo.com, glider@...gle.com, luto@...nel.org,
linux-kernel@...r.kernel.org, dvyukov@...gle.com,
linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com
Subject: Re: [PATCH v4 0/3] kasan: support backing vmalloc space with real
shadow memory
On Thu, Aug 15, 2019 at 10:16:33AM +1000, Daniel Axtens wrote:
> Currently, vmalloc space is backed by the early shadow page. This
> means that kasan is incompatible with VMAP_STACK, and it also provides
> a hurdle for architectures that do not have a dedicated module space
> (like powerpc64).
>
> This series provides a mechanism to back vmalloc space with real,
> dynamically allocated memory. I have only wired up x86, because that's
> the only currently supported arch I can work with easily, but it's
> very easy to wire up other architectures.
I'm happy to send patches for arm64 once we've settled some conflicting
rework going on for 52-bit VA support.
>
> This has been discussed before in the context of VMAP_STACK:
> - https://bugzilla.kernel.org/show_bug.cgi?id=202009
> - https://lkml.org/lkml/2018/7/22/198
> - https://lkml.org/lkml/2019/7/19/822
>
> In terms of implementation details:
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the meantime
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so in practice memory
> usage grows at first and then stays fairly stable.
>
> If we run into practical memory exhaustion issues, I'm happy to
> consider hooking into the book-keeping that vmap does, but I am not
> convinced that it will be an issue.
FWIW, I haven't spotted such memory exhaustion after a week of Syzkaller
fuzzing with the last patchset, across 3 machines, so that sounds fine
to me.
Otherwise, this looks good to me now! For the x86 and fork patch, feel
free to add:
Acked-by: Mark Rutland <mark.rutland@....com>
Mark.
>
> v1: https://lore.kernel.org/linux-mm/20190725055503.19507-1-dja@axtens.net/
> v2: https://lore.kernel.org/linux-mm/20190729142108.23343-1-dja@axtens.net/
> Address review comments:
> - Patch 1: use kasan_unpoison_shadow's built-in handling of
> ranges that do not align to a full shadow byte
> - Patch 3: prepopulate pgds rather than faulting things in
> v3: https://lore.kernel.org/linux-mm/20190731071550.31814-1-dja@axtens.net/
> Address comments from Mark Rutland:
> - kasan_populate_vmalloc is a better name
> - handle concurrency correctly
> - various nits and cleanups
> - relax module alignment in KASAN_VMALLOC case
> v4: Changes to patch 1 only:
> - Integrate Mark's rework, thanks Mark!
> - handle the case where kasan_populate_shadow might fail
> - poison shadow on free, allowing the alloc path to just
> unpoison memory that it uses
>
> Daniel Axtens (3):
> kasan: support backing vmalloc space with real shadow memory
> fork: support VMAP_STACK with KASAN_VMALLOC
> x86/kasan: support KASAN_VMALLOC
>
> Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++
> arch/Kconfig | 9 +++--
> arch/x86/Kconfig | 1 +
> arch/x86/mm/kasan_init_64.c | 61 ++++++++++++++++++++++++++++
> include/linux/kasan.h | 24 +++++++++++
> include/linux/moduleloader.h | 2 +-
> include/linux/vmalloc.h | 12 ++++++
> kernel/fork.c | 4 ++
> lib/Kconfig.kasan | 16 ++++++++
> lib/test_kasan.c | 26 ++++++++++++
> mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++
> mm/kasan/generic_report.c | 3 ++
> mm/kasan/kasan.h | 1 +
> mm/vmalloc.c | 28 ++++++++++++-
> 14 files changed, 308 insertions(+), 6 deletions(-)
>
> --
> 2.20.1
>