Message-ID: <CANpmjNN-jsZoVmJWD2Dz6O3_YVjy0av6e0iD-+OYXpik1LbLvg@mail.gmail.com>
Date: Thu, 23 Jun 2022 13:59:59 +0200
From: Marco Elver <elver@...gle.com>
To: yee.lee@...iatek.com
Cc: linux-kernel@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthias Brugger <matthias.bgg@...il.com>,
"open list:KFENCE" <kasan-dev@...glegroups.com>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
"moderated list:ARM/Mediatek SoC support"
<linux-arm-kernel@...ts.infradead.org>,
"moderated list:ARM/Mediatek SoC support"
<linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH 1/1] mm: kfence: skip kmemleak alloc in kfence_pool
On Thu, 23 Jun 2022 at 13:20, yee.lee via kasan-dev
<kasan-dev@...glegroups.com> wrote:
>
> From: Yee Lee <yee.lee@...iatek.com>
>
> Use MEMBLOCK_ALLOC_NOLEAKTRACE to skip kmemleak registration when
> the kfence pool is allocated from memblock. The later kmemleak_free()
> call can then be removed as well.
Is this purely meant to be a cleanup and non-functional change?
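(For readers without the full patch in front of them: I assume the
allocation-side hunk, which is not quoted below, amounts to passing
MEMBLOCK_ALLOC_NOLEAKTRACE as the memblock range limit. The sketch below
is my approximation of that change, not the actual hunk;
kfence_alloc_pool_sketch() is a made-up name.)

#include <linux/kfence.h>
#include <linux/memblock.h>
#include <linux/numa.h>
#include <linux/printk.h>

/*
 * Early path, roughly what kfence_alloc_pool() would do with this patch:
 * passing MEMBLOCK_ALLOC_NOLEAKTRACE as the upper limit makes memblock
 * skip registering the pool with kmemleak, so the kmemleak_free() that
 * the patch removes further down is no longer needed for this path.
 */
static void __init kfence_alloc_pool_sketch(void)
{
	__kfence_pool = memblock_alloc_try_nid(KFENCE_POOL_SIZE, PAGE_SIZE,
					       MEMBLOCK_LOW_LIMIT,
					       MEMBLOCK_ALLOC_NOLEAKTRACE,
					       NUMA_NO_NODE);
	if (!__kfence_pool)
		pr_err("kfence: failed to allocate pool\n");
}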
> Signed-off-by: Yee Lee <yee.lee@...iatek.com>
>
> ---
> mm/kfence/core.c | 18 ++++++++----------
> 1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 4e7cd4c8e687..0d33d83f5244 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -600,14 +600,6 @@ static unsigned long kfence_init_pool(void)
> addr += 2 * PAGE_SIZE;
> }
>
> - /*
> - * The pool is live and will never be deallocated from this point on.
> - * Remove the pool object from the kmemleak object tree, as it would
> - * otherwise overlap with allocations returned by kfence_alloc(), which
> - * are registered with kmemleak through the slab post-alloc hook.
> - */
> - kmemleak_free(__kfence_pool);
This appears to be a non-functional change only if the pool is
allocated early. If the pool is allocated late using the page allocator,
then there won't be a kmemleak_free() on that memory and we'll have the
same problem.
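(To illustrate the concern, here is a sketch of the late path; the names
are approximated from kfence_init_late() in mm/kfence/core.c rather than
copied verbatim, and kfence_init_late_sketch() is a made-up name.)

#include <linux/gfp.h>
#include <linux/kfence.h>
#include <linux/mm.h>
#include <linux/nodemask.h>

/*
 * Late path: the pool comes from the page allocator, never from memblock,
 * so MEMBLOCK_ALLOC_NOLEAKTRACE cannot apply here. With the
 * kmemleak_free(__kfence_pool) call above removed, there is also no
 * kmemleak_free() on this memory any more.
 */
static int kfence_init_late_sketch(void)
{
	unsigned long nr_pages = KFENCE_POOL_SIZE / PAGE_SIZE;
	struct page *pages;

	/* CONFIG_CONTIG_ALLOC variant; otherwise alloc_pages_exact() is used. */
	pages = alloc_contig_pages(nr_pages, GFP_KERNEL, first_online_node, NULL);
	if (!pages)
		return -ENOMEM;

	__kfence_pool = page_to_virt(pages);
	/* ... then kfence_init_pool() runs and KFENCE is enabled ... */
	return 0;
}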