Message-ID: <Y/fyLONTVVjEf22Q@google.com>
Date: Thu, 23 Feb 2023 15:09:32 -0800
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosryahmed@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv2 1/6] zsmalloc: remove insert_zspage() ->inuse
optimization
On Thu, Feb 23, 2023 at 12:04:46PM +0900, Sergey Senozhatsky wrote:
> This optimization has no effect. It only ensures that
> when a page was added to its corresponding fullness
> list, its "inuse" counter was higher or lower than the
> "inuse" counter of the page at the head of the list.
> The intention was to keep busy pages at the head, so
> they could be filled up and moved to the ZS_FULL
> fullness group more quickly. However, this doesn't work
> as the "inuse" counter of a page can be modified by
zspage
Let's use the term zspage instead of page to prevent confusion.
> obj_free() but the page may still belong to the same
> fullness list. So, fix_fullness_group() won't change
Yes. I didn't expect it to be perfect from the beginning,
but thought it would help as a small optimization.
> the page's position in relation to the head's "inuse"
> counter, leading to a largely random order of pages
> within the fullness list.
Good point.
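
For reference, the flow is roughly the following. This is only a
simplified sketch of what fix_fullness_group() does, not the exact
mm/zsmalloc.c code, and the get_fullness()/set_fullness() helpers
below are placeholders for the real accessors. The point is that
nothing here re-sorts a zspage within its current list, so any
->inuse ordering established at insert time decays as objects are
freed:

/* simplified sketch; helper names are placeholders */
static enum fullness_group fix_fullness_group_sketch(struct size_class *class,
						     struct zspage *zspage)
{
	enum fullness_group currfg = get_fullness(zspage);
	enum fullness_group newfg = get_fullness_group(class, zspage);

	if (newfg == currfg)
		return newfg;	/* position within the list is untouched */

	/* only a group change moves the zspage, and only to the list head */
	remove_zspage(class, zspage, currfg);
	insert_zspage(class, zspage, newfg);
	set_fullness(zspage, newfg);
	return newfg;
}
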
>
> For instance, consider a printout of the "inuse"
> counters of the first 10 pages in a class that holds
> 93 objects per zspage:
>
> ZS_ALMOST_EMPTY: 36 67 68 64 35 54 63 52
>
> As we can see the page with the lowest "inuse" counter
> is actually the head of the fullness list.
Let's state clearly what the patch is doing, e.g.
"So, let's remove the pointless optimization" or some better wording.
>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
> ---
> mm/zsmalloc.c | 29 ++++++++---------------------
> 1 file changed, 8 insertions(+), 21 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3aed46ab7e6c..b57a89ed6f30 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -753,37 +753,24 @@ static enum fullness_group get_fullness_group(struct size_class *class,
> }
>
> /*
> - * Each size class maintains various freelists and zspages are assigned
> - * to one of these freelists based on the number of live objects they
> - * have. This functions inserts the given zspage into the freelist
> - * identified by <class, fullness_group>.
> + * This function adds the given zspage to the fullness list identified
> + * by <class, fullness_group>.
> */
> static void insert_zspage(struct size_class *class,
> - struct zspage *zspage,
> - enum fullness_group fullness)
> + struct zspage *zspage,
> + enum fullness_group fullness)
Unnecessary whitespace-only changes.
> {
> - struct zspage *head;
> -
> class_stat_inc(class, fullness, 1);
> - head = list_first_entry_or_null(&class->fullness_list[fullness],
> - struct zspage, list);
> - /*
> - * We want to see more ZS_FULL pages and less almost empty/full.
> - * Put pages with higher ->inuse first.
> - */
> - if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
> - list_add(&zspage->list, &head->list);
> - else
> - list_add(&zspage->list, &class->fullness_list[fullness]);
> + list_add(&zspage->list, &class->fullness_list[fullness]);
> }
>
> /*
> - * This function removes the given zspage from the freelist identified
> + * This function removes the given zspage from the fullness list identified
> * by <class, fullness_group>.
> */
> static void remove_zspage(struct size_class *class,
> - struct zspage *zspage,
> - enum fullness_group fullness)
> + struct zspage *zspage,
> + enum fullness_group fullness)
Ditto.
Other than that, looks good to me.