Message-ID: <CAG48ez32X1WKryh5ueQ0=Mn=PMKc6zunOYsMHhwMMMxKKaMfqA@mail.gmail.com>
Date: Wed, 25 Jan 2023 10:27:25 +0100
From: Jann Horn <jannh@...gle.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Uladzislau Rezki <urezki@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
linux-kernel@...r.kernel.org,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
kasan-dev@...glegroups.com
Subject: Re: [PATCH] fork, vmalloc: KASAN-poison backing pages of vmapped stacks
On Wed, Jan 18, 2023 at 8:36 AM Dmitry Vyukov <dvyukov@...gle.com> wrote:
> On Tue, 17 Jan 2023 at 17:35, Jann Horn <jannh@...gle.com> wrote:
> >
> > KASAN (except in HW_TAGS mode) tracks memory state based on virtual
> > addresses. The mappings of kernel stack pages in the linear mapping are
> > currently marked as fully accessible.
>
> Hi Jann,
>
> To confirm my understanding: this requires not just KASAN (in a mode
> other than HW_TAGS) but also CONFIG_VMAP_STACK, right?
Yes.
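
To make the aliasing concrete: every page backing a vmapped stack is
reachable both through its vmalloc address and through its alias in the
linear mapping, and generic/SW_TAGS KASAN tracks state separately per
virtual address. A minimal illustrative sketch (show_stack_alias() is a
hypothetical helper, not part of the patch):

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical helper, for illustration only: a page backing a
 * vmapped stack has two virtual addresses. Poisoning the vmalloc
 * mapping says nothing about the alias in the linear mapping.
 */
static void show_stack_alias(void *vmap_addr)
{
        struct page *page = vmalloc_to_page(vmap_addr);
        void *linear_addr = page_address(page); /* linear-map alias */

        pr_info("stack page: vmap %px, linear alias %px\n",
                vmap_addr, linear_addr);
}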
> > Since stack corruption issues can cause some very gnarly errors, let's be
> > extra careful and tell KASAN to forbid accesses to stack memory through the
> > linear mapping.
> >
> > Signed-off-by: Jann Horn <jannh@...gle.com>
> > ---
> > I wrote this after seeing
> > https://lore.kernel.org/all/Y8W5rjKdZ9erIF14@casper.infradead.org/
> > and wondering about possible ways that this kind of stack corruption
> > could be sneaking past KASAN.
> > That's proooobably not the explanation, but still...
>
> I think catching any silent corruption is still very useful. Besides
> producing confusing reports, such corruption sometimes leads to an
> explosion of random reports all over the kernel.
>
> >  include/linux/vmalloc.h |  6 ++++++
> >  kernel/fork.c           | 10 ++++++++++
> >  mm/vmalloc.c            | 24 ++++++++++++++++++++++++
> >  3 files changed, 40 insertions(+)
> >
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index 096d48aa3437..bfb50178e5e3 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -297,4 +297,10 @@ bool vmalloc_dump_obj(void *object);
> >  static inline bool vmalloc_dump_obj(void *object) { return false; }
> >  #endif
> >
> > +#if defined(CONFIG_MMU) && (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
> > +void vmalloc_poison_backing_pages(const void *addr);
> > +#else
> > +static inline void vmalloc_poison_backing_pages(const void *addr) {}
> > +#endif
> > +
> >  #endif /* _LINUX_VMALLOC_H */
>
> I think this should be in the kasan headers and prefixed with kasan_.
> kmsan/kcsan may also poison memory, and there is hardware poisoning
> (MADV_HWPOISON) as well, so "poison" is a somewhat overloaded term on
> its own.
>
> Can/should this be extended to all vmalloc-ed memory? Or can some of
> it be accessed via both addresses?
I think anything that does vmalloc_to_page() has a high chance of
doing accesses via both addresses, in particular anything involving
DMA.
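
As a sketch of that pattern (fill_sg_from_vmalloc() is a hypothetical
example, not code from any particular driver): building a scatterlist
from a vmalloc'd buffer resolves each backing page with
vmalloc_to_page(), and those pages are then accessed through DMA or the
linear mapping rather than through the vmalloc alias, so poisoning the
linear-map side would trip on any CPU-side access through it:

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical example: hand vmalloc memory to something that works
 * on struct page (DMA, scatterlists, ...). Assumes the sg table has
 * enough entries for the whole buffer.
 */
static void fill_sg_from_vmalloc(struct scatterlist *sg, void *buf,
                                 size_t len)
{
        while (len) {
                struct page *page = vmalloc_to_page(buf);
                unsigned int off = offset_in_page(buf);
                unsigned int chunk = min_t(size_t, len, PAGE_SIZE - off);

                sg_set_page(sg, page, chunk, off);
                sg = sg_next(sg);
                buf += chunk;
                len -= chunk;
        }
}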
Oooh, actually, there is some CIFS code that does vmalloc_to_page()
and talks about stack memory... I'll report that over on the other
thread re CIFS weirdness.
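
For context: the mm/vmalloc.c part of the diffstat above is not quoted
in this reply. A minimal sketch of what such a helper could look like,
assuming find_vm_area() and kasan_poison_pages(); this is an
illustration, not necessarily the implementation as posted:

#include <linux/kasan.h>
#include <linux/vmalloc.h>

/*
 * Sketch only, not the patch as posted: poison the linear-map alias
 * of every page backing a vmalloc area, so that KASAN catches
 * accesses that bypass the vmalloc mapping.
 */
void vmalloc_poison_backing_pages(const void *addr)
{
        struct vm_struct *area;
        int i;

        area = find_vm_area(addr);
        if (WARN_ON_ONCE(!area))
                return;

        for (i = 0; i < area->nr_pages; i++)
                kasan_poison_pages(area->pages[i], 0, false);
}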
> Also, should we mprotect it instead while it's allocated as the stack?
> If that works, it looks like a reasonable improvement for
> CONFIG_VMAP_STACK in general. It would also catch non-instrumented
> accesses.
Well, we could also put it under CONFIG_DEBUG_PAGEALLOC and then use
the debug_pagealloc_map_pages() / debug_pagealloc_unmap_pages()
facilities to remove the page table entries. But I don't know if
anyone actually runs fuzzing with CONFIG_DEBUG_PAGEALLOC.
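
A rough sketch of that alternative, assuming the stack's vm_struct is
at hand (stack_unmap_linear_alias() is a hypothetical name;
debug_pagealloc_map_pages()/debug_pagealloc_unmap_pages() are the
existing facilities):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical sketch of the CONFIG_DEBUG_PAGEALLOC variant: remove
 * the linear-map page table entries for the stack's backing pages so
 * that any access through the linear mapping faults outright,
 * instrumented or not. The pages would have to be mapped again with
 * debug_pagealloc_map_pages() before the stack is freed.
 */
static void stack_unmap_linear_alias(struct vm_struct *area)
{
        int i;

        if (!debug_pagealloc_enabled())
                return;

        for (i = 0; i < area->nr_pages; i++)
                debug_pagealloc_unmap_pages(area->pages[i], 1);
}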