Message-ID: <ZAucxpUG5/6Y4GSL@google.com>
Date: Fri, 10 Mar 2023 13:10:30 -0800
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosryahmed@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv4 0/4] zsmalloc: fine-grained fullness and new compaction
algorithm
On Sat, Mar 04, 2023 at 12:48:31PM +0900, Sergey Senozhatsky wrote:
> Hi,
>
> Existing zsmalloc page fullness grouping leads to suboptimal page
> selection for both zs_malloc() and zs_compact(). This patchset
> reworks zsmalloc fullness grouping/classification.
>
> Additionally, it implements a new compaction algorithm that is
> expected to use less CPU-cycles (as it potentially does fewer
> memcpy-s in zs_object_copy()).
>
> Test (synthetic) results can be seen in patch 0003.
>
> v4:
> -- fixed classes stats loop bug (Yosry)
> -- fixed spelling errors (Andrew)
> -- dropped some unnecessary hunks from the patches
>
> v3:
> -- reworked compaction algorithm implementation (Minchan)
> -- keep existing stats and fullness enums (Minchan, Yosry)
> -- dropped the patch with new zsmalloc compaction stats (Minchan)
> -- report class stats per inuse-ratio group
>
> Sergey Senozhatsky (4):
> zsmalloc: remove insert_zspage() ->inuse optimization
> zsmalloc: fine-grained inuse ratio based fullness grouping
> zsmalloc: rework compaction algorithm
> zsmalloc: show per fullness group class stats
>
> mm/zsmalloc.c | 358 ++++++++++++++++++++++++--------------------------
> 1 file changed, 173 insertions(+), 185 deletions(-)
>
> --
Acked-by: Minchan Kim <minchan@...nel.org>
Thanks, Sergey!