Message-ID: <20190819101517.GA7482@lakrids.cambridge.arm.com>
Date: Mon, 19 Aug 2019 11:15:18 +0100
From: Mark Rutland <mark.rutland@....com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Christophe Leroy <christophe.leroy@....fr>,
Daniel Axtens <dja@...ens.net>,
kasan-dev <kasan-dev@...glegroups.com>,
Linux-MM <linux-mm@...ck.org>, X86 ML <x86@...nel.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
Vasily Gorbik <gor@...ux.ibm.com>
Subject: Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real
shadow memory
On Fri, Aug 16, 2019 at 10:41:00AM -0700, Andy Lutomirski wrote:
> On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland <mark.rutland@....com> wrote:
> >
> > Hi Christophe,
> >
> > On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> > > Le 15/08/2019 à 02:16, Daniel Axtens a écrit :
> > > > Hook into vmalloc and vmap, and dynamically allocate real shadow
> > > > memory to back the mappings.
> > > >
> > > > Most mappings in vmalloc space are small, requiring less than a full
> > > > page of shadow space. Allocating a full shadow page per mapping would
> > > > therefore be wasteful. Furthermore, to ensure that different mappings
> > > > use different shadow pages, mappings would have to be aligned to
> > > > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
> > > >
> > > > Instead, share backing space across multiple mappings. Allocate
> > > > a backing page the first time a mapping in vmalloc space uses a
> > > > particular page of the shadow region. Keep this page around
> > > > regardless of whether the mapping is later freed - in the mean time
> > > > the page could have become shared by another vmalloc mapping.
> > > >
> > > > This can in theory lead to unbounded memory growth, but the vmalloc
> > > > allocator is pretty good at reusing addresses, so the practical memory
> > > > usage grows at first but then stays fairly stable.
> > >
> > > I guess people with gigabytes of memory don't mind, but I'm concerned
> > > about tiny targets with very little memory. I have boards with as little
> > > as 32 MB of RAM. The shadow region for the linear space already takes one
> > > eighth of the RAM. I'd rather avoid keeping unused shadow pages busy.
> >
> > I think this depends on how much shadow would be in constant use vs what
> > would get left unused. If the amount in constant use is sufficiently
> > large (or the residue is sufficiently small), then it may not be
> > worthwhile to support KASAN_VMALLOC on such small systems.
> >
> > > Each page of shadow memory represents 8 pages of real memory. Could we
> > > use page_ref to count how many pieces of a shadow page are used, so that
> > > we can free it when the refcount drops to 0?
> > >
> > > > This requires architecture support to actually use: arches must stop
> > > > mapping the read-only zero page over the portion of the shadow region
> > > > that covers the vmalloc space and instead leave it unmapped.
> > >
> > > Why 'must'? Couldn't we switch back and forth between the zero page and
> > > a real page on demand?
> > >
> > > If the zero page is not mapped for unused vmalloc space, bad memory accesses
> > > will Oops on the shadow memory access instead of Oopsing on the real bad
> > > access, making it more difficult to locate and identify the issue.
> >
> > I agree this isn't nice, though FWIW this can already happen today for
> > bad addresses that fall outside of the usual kernel address space. We
> > could make the !KASAN_INLINE checks resilient to this by using
> > probe_kernel_read() to check the shadow, and treating unmapped shadow as
> > poison.
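
As a rough illustration of that (an untested sketch, not part of the patch:
kasan_shadow_byte_ok() is a made-up name, and it ignores the partial-granule
handling the real outline check does), the check could probe the shadow byte
before trusting it:

#include <linux/kasan.h>
#include <linux/uaccess.h>

/*
 * Return true if the shadow for @addr says the access is OK. If the
 * shadow page is not mapped, probe_kernel_read() fails, and we treat
 * the address as poisoned rather than faulting on the shadow itself.
 */
static bool kasan_shadow_byte_ok(const void *addr)
{
	s8 shadow_val;
	const void *shadow = kasan_mem_to_shadow(addr);

	if (probe_kernel_read(&shadow_val, shadow, sizeof(shadow_val)))
		return false;

	/* Simplified: zero shadow means the whole granule is accessible. */
	return shadow_val == 0;
}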
>
> Could we instead modify the page fault handlers to detect this case
> and print a useful message?
In general we can't know if a bad access was a KASAN shadow lookup (e.g.
since the shadow of NULL falls outside of the shadow region), but we
could always print a message using kasan_shadow_to_mem() for any
unhandled fault to suggest what the "real" address might have been.
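
To be concrete, a hypothetical, untested sketch (kasan_shadow_fault_hint()
is a made-up name, and it would need to live somewhere that can see
kasan_shadow_to_mem(), i.e. under mm/kasan/) that the arch unhandled-fault
path could call:

#include <linux/printk.h>

#include "kasan.h"	/* for kasan_shadow_to_mem() */

/*
 * We can't tell whether the faulting address really was a shadow
 * lookup, so the message is only a suggestion.
 */
static void kasan_shadow_fault_hint(unsigned long addr)
{
	pr_alert("KASAN: if this was a shadow access, the address being checked was probably %px\n",
		 kasan_shadow_to_mem((void *)addr));
}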
Thanks,
Mark.