Message-ID: <CANr2M1_9Y9s1jYXOYJDxTtZbnxyc4Xwb2Ask+nZ_eSaZCnCd7A@mail.gmail.com>
Date: Sat, 6 Feb 2021 02:10:56 +0800
From: Lecopzer Chen <lecopzer@...il.com>
To: Will Deacon <will@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Konovalov <andreyknvl@...gle.com>, ardb@...nel.org,
aryabinin@...tuozzo.com, broonie@...nel.org,
Catalin Marinas <catalin.marinas@....com>,
dan.j.williams@...el.com, Dmitry Vyukov <dvyukov@...gle.com>,
Alexander Potapenko <glider@...gle.com>, gustavoars@...nel.org,
kasan-dev@...glegroups.com,
Jian-Lin Chen <lecopzer.chen@...iatek.com>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mediatek@...ts.infradead.org, linux-mm@...ck.org,
linux@...ck-us.net, robin.murphy@....com, rppt@...nel.org,
tyhicks@...ux.microsoft.com, vincenzo.frascino@....com,
yj.chiang@...iatek.com
Subject: Re: [PATCH v2 1/4] arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
On Sat, Feb 6, 2021 at 1:19 AM Will Deacon <will@...nel.org> wrote:
>
> On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> >
> > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > > > Linux has supported KASAN for vmalloc since commit 3c5c3cfb9ef4da9
> > > > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > > > >
> > > > > > As is already done for MODULES_VADDR, don't early-populate the shadow
> > > > > > for the region between VMALLOC_START and VMALLOC_END.
> > > > > > However, the kernel image mapping now lives in the vmalloc area, so
> > > > > > its shadow should stay populated.
> > > > > >
> > > > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@...iatek.com>
> > > > > > ---
> > > > > > arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > > > > 1 file changed, 18 insertions(+), 5 deletions(-)
> > > > > >
> > > > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > > > index d8e66c78440e..39b218a64279 100644
> > > > > > --- a/arch/arm64/mm/kasan_init.c
> > > > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > > > > {
> > > > > > u64 kimg_shadow_start, kimg_shadow_end;
> > > > > > u64 mod_shadow_start, mod_shadow_end;
> > > > > > + u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > > > > phys_addr_t pa_start, pa_end;
> > > > > > u64 i;
> > > > > >
> > > > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > > > > mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > > > > mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > > > >
> > > > > > + vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > > > + vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > > > +
> > > > > > /*
> > > > > > * We are going to perform proper setup of shadow memory.
> > > > > > * At first we should unmap early shadow (clear_pgds() call below).
> > > > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > > > >
> > > > > > kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > > > > (void *)mod_shadow_start);
> > > > > > - kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > > > - (void *)KASAN_SHADOW_END);
> > > > > > + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > > > >
> > > > > Do we really need yet another CONFIG option for KASAN? What's the use-case
> > > > > for *not* enabling this if you're already enabling one of the KASAN
> > > > > backends?
> > > >
> > > > As far as I know, KASAN_VMALLOC currently only supports KASAN_GENERIC, and
> > > > KASAN_VMALLOC also uses more memory to map the real shadow memory (1/8 of the vmalloc VA).
> > >
> > > The shadow is allocated dynamically though, isn't it?
> >
> > Yes, but it's still a cost.
> >
> > > > There may be someone who can enable KASAN_GENERIC but can't use KASAN_VMALLOC
> > > > due to memory constraints.
> > >
> > > That doesn't sound particularly realistic to me. The reason I'm pushing here
> > > is because I would _really_ like to move to VMAP stack unconditionally, and
> > > that would effectively force KASAN_VMALLOC to be set if KASAN is in use.
> > >
> > > So unless there's a really good reason not to do that, please can we make
> > > this unconditional for arm64? Pretty please?
> >
> > I think it's fine since we have a good reason.
> > Also, if someone has memory issues with KASAN_VMALLOC,
> > they can use SW_TAGS, right?
> >
> > However, SW_TAGS/HW_TAGS don't support VMALLOC yet,
> > so the code would be like:
> >
> > if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>
> Just make this CONFIG_KASAN_VMALLOC, since that depends on KASAN_GENERIC.
OK, this also makes sense.
My first thought was that having KASAN_GENERIC imply KASAN_VMALLOC on
arm64 is a special case, so it needs to be well documented.
I'll document this in the commit message of the Kconfig patch to avoid
cluttering the code here.
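For reference, a rough (untested) sketch of how the populate call could end
up looking with that suggestion; the variable names are the ones already
used in this patch:

    if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
        /*
         * Leave the vmalloc shadow unpopulated here; it gets backed
         * with real shadow pages on demand by kasan_populate_vmalloc().
         * Only the range above the vmalloc shadow still needs the
         * early (zero) shadow.
         */
        kasan_populate_early_shadow((void *)vmalloc_shadow_end,
                                    (void *)KASAN_SHADOW_END);
    } else {
        kasan_populate_early_shadow((void *)kimg_shadow_end,
                                    (void *)KASAN_SHADOW_END);
    }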
I'm going to send the v3 patch; thanks again for your review.
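As a side note on the shadow cost mentioned above: with generic KASAN,
KASAN_SHADOW_SCALE_SHIFT is 3, so one shadow byte covers eight bytes of
memory, roughly:

    /* Illustration only: address-to-shadow translation for generic KASAN. */
    shadow = (void *)(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;

i.e. backing a 16 MB vmalloc allocation with real shadow takes about
16 MB / 8 = 2 MB of extra pages, allocated on demand.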
BRs,
Lecopzer