Message-Id: <D04CBA99-3E17-4749-9144-34B6D9D3208E@linux.dev>
Date: Tue, 28 Mar 2023 20:53:21 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Marco Elver <elver@...gle.com>
Cc: Muchun Song <songmuchun@...edance.com>, glider@...gle.com,
dvyukov@...gle.com, Andrew Morton <akpm@...ux-foundation.org>,
jannh@...gle.com, sjpark@...zon.de, kasan-dev@...glegroups.com,
Linux Memory Management List <linux-mm@...ck.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] mm: kfence: simplify kfence pool initialization
> On Mar 28, 2023, at 20:05, Marco Elver <elver@...gle.com> wrote:
>
> On Tue, 28 Mar 2023 at 13:55, Marco Elver <elver@...gle.com> wrote:
>>
>> On Tue, 28 Mar 2023 at 11:58, Muchun Song <songmuchun@...edance.com> wrote:
>>>
>>> There are three similar loops that initialize the kfence pool. Merge
>>> them into a single loop to simplify the code and make it more
>>> efficient.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@...edance.com>
>>
>> Reviewed-by: Marco Elver <elver@...gle.com>
>>
>>> ---
>>> mm/kfence/core.c | 47 ++++++-----------------------------------------
>>> 1 file changed, 6 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
>>> index 7d01a2c76e80..de62a84d4830 100644
>>> --- a/mm/kfence/core.c
>>> +++ b/mm/kfence/core.c
>>> @@ -539,35 +539,10 @@ static void rcu_guarded_free(struct rcu_head *h)
>>> static unsigned long kfence_init_pool(void)
>>> {
>>> unsigned long addr = (unsigned long)__kfence_pool;
>>> - struct page *pages;
>>> int i;
>>>
>>> if (!arch_kfence_init_pool())
>>> return addr;
>>> -
>>> - pages = virt_to_page(__kfence_pool);
>>> -
>>> - /*
>>> - * Set up object pages: they must have PG_slab set, to avoid freeing
>>> - * these as real pages.
>>> - *
>>> - * We also want to avoid inserting kfence_free() in the kfree()
>>> - * fast-path in SLUB, and therefore need to ensure kfree() correctly
>>> - * enters __slab_free() slow-path.
>>> - */
>
> Actually: can you retain this comment somewhere?
Sure, I'll move this comment to the right place.
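
Something like the following is what I have in mind -- a rough sketch
only, not the final patch (error handling on the failure path and the
memcg setup are elided, and the exact helpers may differ), with the
comment kept above the object-page setup in the merged loop:

	static unsigned long kfence_init_pool(void)
	{
		unsigned long addr = (unsigned long)__kfence_pool;
		int i;

		if (!arch_kfence_init_pool())
			return addr;

		/*
		 * Protect the first 2 pages: an even number of pages
		 * simplifies the mapping of address to metadata index.
		 */
		for (i = 0; i < 2; i++, addr += PAGE_SIZE)
			if (unlikely(!kfence_protect(addr)))
				return addr;

		for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++, addr += 2 * PAGE_SIZE) {
			struct kfence_metadata *meta = &kfence_metadata[i];

			/* Initialize metadata and add it to the freelist. */
			INIT_LIST_HEAD(&meta->list);
			raw_spin_lock_init(&meta->lock);
			meta->state = KFENCE_OBJECT_UNUSED;
			meta->addr = addr;
			list_add_tail(&meta->list, &kfence_freelist);

			/* Protect the right redzone. */
			if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
				return addr;

			/*
			 * Set up object pages: they must have PG_slab set, to
			 * avoid freeing these as real pages.
			 *
			 * We also want to avoid inserting kfence_free() in the
			 * kfree() fast-path in SLUB, and therefore need to
			 * ensure kfree() correctly enters __slab_free()
			 * slow-path.
			 */
			__folio_set_slab(page_folio(virt_to_page((void *)addr)));
		}

		return 0;
	}
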
Thanks.