Message-ID: <d8632edc-5021-4dc8-b75a-3995a710f196@bytedance.com>
Date: Fri, 23 Jun 2023 20:49:05 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: Dave Chinner <david@...morbit.com>
Cc: akpm@...ux-foundation.org, tkhai@...ru, vbabka@...e.cz,
roman.gushchin@...ux.dev, djwong@...nel.org, brauner@...nel.org,
paulmck@...nel.org, tytso@....edu, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, intel-gfx@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
dm-devel@...hat.com, linux-raid@...r.kernel.org,
linux-bcache@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-nfs@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-btrfs@...r.kernel.org
Subject: Re: [PATCH 02/29] mm: vmscan: introduce some helpers for dynamically
allocating shrinker
Hi Dave,
On 2023/6/23 14:12, Dave Chinner wrote:
> On Thu, Jun 22, 2023 at 04:53:08PM +0800, Qi Zheng wrote:
>> Introduce some helpers for dynamically allocating shrinker instances;
>> their uses are as follows:
>>
>> 1. shrinker_alloc_and_init()
>>
>> Used to allocate and initialize a shrinker instance. The priv_data
>> parameter is used to pass a pointer to the structure in which the
>> shrinker instance was previously embedded.
>>
>> 2. shrinker_free()
>>
>> Used to free the shrinker instance when registration of the shrinker
>> fails.
>>
>> 3. unregister_and_free_shrinker()
>>
>> Used to unregister and free the shrinker instance, and the kfree()
>> will be changed to kfree_rcu() later.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
>> ---
>> include/linux/shrinker.h | 12 ++++++++++++
>> mm/vmscan.c | 35 +++++++++++++++++++++++++++++++++++
>> 2 files changed, 47 insertions(+)
>>
>> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>> index 43e6fcabbf51..8e9ba6fa3fcc 100644
>> --- a/include/linux/shrinker.h
>> +++ b/include/linux/shrinker.h
>> @@ -107,6 +107,18 @@ extern void unregister_shrinker(struct shrinker *shrinker);
>> extern void free_prealloced_shrinker(struct shrinker *shrinker);
>> extern void synchronize_shrinkers(void);
>>
>> +typedef unsigned long (*count_objects_cb)(struct shrinker *s,
>> +					   struct shrink_control *sc);
>> +typedef unsigned long (*scan_objects_cb)(struct shrinker *s,
>> +					  struct shrink_control *sc);
>> +
>> +struct shrinker *shrinker_alloc_and_init(count_objects_cb count,
>> +					  scan_objects_cb scan, long batch,
>> +					  int seeks, unsigned flags,
>> +					  void *priv_data);
>> +void shrinker_free(struct shrinker *shrinker);
>> +void unregister_and_free_shrinker(struct shrinker *shrinker);
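
(For context on the helpers above: they are meant to be used together
roughly as in the sketch below. This only illustrates the intended call
sequence; the foo_* names, the shrinker name string and the error
handling are made up here, not taken from the actual conversion
patches.)

	/* foo_count/foo_scan are the usual ->count_objects/->scan_objects */
	foo->shrinker = shrinker_alloc_and_init(foo_count, foo_scan,
						0, DEFAULT_SEEKS, 0, foo);
	if (!foo->shrinker)
		return -ENOMEM;

	if (register_shrinker(foo->shrinker, "foo")) {
		/* registration failed: only undo the allocation */
		shrinker_free(foo->shrinker);
		return -ENOMEM;
	}

	...

	/* teardown: unregister and free in one call */
	unregister_and_free_shrinker(foo->shrinker);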
>
> Hmmmm. Not exactly how I envisioned this to be done.
>
> Ok, this will definitely work, but I don't think it is an
> improvement. It's certainly not what I was thinking of when I
> suggested dynamically allocating shrinkers.
>
> The main issue is that this doesn't simplify the API - it expands it
> and creates a minefield of old and new functions that have to be
> used in exactly the right order for the right things to happen.
>
> What I was thinking of was moving the entire shrinker setup code
> over to the prealloc/register_prepared() algorithm, where the setup
> is already separated from the activation of the shrinker.
>
> That is, we start by renaming prealloc_shrinker() to
> shrinker_alloc(), adding a flags field to tell it everything that it
> needs to alloc (i.e. the NUMA/MEMCG_AWARE flags) and having it
> return a fully allocated shrinker ready to register. Initially
> this also contains an internal flag to say the shrinker was
> allocated so that unregister_shrinker() knows to free it.
>
> The caller then fills out the shrinker functions, seeks, etc. just
> like they do now, and then calls register_shrinker_prepared() to make
> the shrinker active when it wants to turn it on.
>
> When it is time to tear down the shrinker, no API needs to change.
> unregister_shrinker() does all the shutdown and frees all the
> internal memory like it does now. If the shrinker is also marked as
> allocated, it frees the shrinker via RCU, too.
>
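
If I'm reading this right, the teardown path you describe would look
roughly like the sketch below (just my understanding; SHRINKER_ALLOCATED
and the rcu field are placeholder names for whatever the internal
"allocated" marking ends up being):

	void unregister_shrinker(struct shrinker *shrinker)
	{
		/* existing shutdown: unlink from the shrinker list, wait
		 * for in-flight work, free nr_deferred, etc. */
		...

		/*
		 * If the shrinker came from shrinker_alloc(), also free the
		 * shrinker itself, via RCU so lockless walkers stay safe.
		 * (SHRINKER_ALLOCATED is a placeholder for the internal flag.)
		 */
		if (shrinker->flags & SHRINKER_ALLOCATED)
			kfree_rcu(shrinker, rcu);
	}
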
> Once everything is converted to this API, we then remove
> register_shrinker(), rename register_shrinker_prepared() to
> shrinker_register(), rename unregister_shrinker to
> shrinker_unregister(), get rid of the internal "allocated" flag
> and always free the shrinker.
IIUC, you mean that we also need to convert the original statically
defined shrinker instances to dynamically allocated ones.
I think this is a good idea; it helps to simplify the APIs and also
removes the special handling for cases a and b (mentioned in the cover
letter).
>
> At the end of the patchset, every shrinker should be set
> up in a manner like this:
>
>
> 	sb->shrinker = shrinker_alloc(SHRINKER_MEMCG_AWARE|SHRINKER_NUMA_AWARE,
> 				      "sb-%s", type->name);
> 	if (!sb->shrinker)
> 		return -ENOMEM;
>
> 	sb->shrinker->count_objects = super_cache_count;
> 	sb->shrinker->scan_objects = super_cache_scan;
> 	sb->shrinker->batch = 1024;
> 	sb->shrinker->private = sb;
>
> 	.....
>
> 	shrinker_register(sb->shrinker);
>
> And teardown is just a call to shrinker_unregister(sb->shrinker)
> as it is now.
>
> i.e. the entire shrinker registration API is now just three
> functions, down from the current four, and much simpler than the
> seven functions this patch set results in...
>
> The other advantage of this is that it will break all the existing
> out of tree code and third party modules using the old API and will
> no longer work with a kernel using lockless slab shrinkers. They
> need to break (both at the source and binary levels) to stop bad
> things from happening due to using unconverted shrinkers in the new
> setup.
Got it. And totally agree.
I will do it in v2.
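
As a rough sketch of what I have in mind for v2 (the private_data and
rcu field names are tentative), the dynamically allocated shrinker
would carry something like:

	struct shrinker {
		unsigned long (*count_objects)(struct shrinker *,
					       struct shrink_control *sc);
		unsigned long (*scan_objects)(struct shrinker *,
					      struct shrink_control *sc);
		long batch;
		int seeks;
		unsigned int flags;	/* incl. an internal "allocated" bit */
		void *private_data;	/* back-pointer to the embedding object */
		struct rcu_head rcu;	/* so it can be freed via kfree_rcu() */
		/* internal state (nr_deferred, list linkage, ...) as today */
	};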
Thanks,
Qi
>
> -Dave.