Message-ID: <Y1dBpLDf+mRH6cLf@google.com>
Date: Tue, 25 Oct 2022 10:53:40 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Alexey Romanov <avromanov@...rdevices.ru>
Cc: minchan@...nel.org, senozhatsky@...omium.org, ngupta@...are.org,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel@...rdevices.ru
Subject: Re: [PATCH v1] zram: add size class equals check into recompression
On (22/10/24 15:09), Alexey Romanov wrote:
> It makes no sense to recompress the object if it ends up in the
> same size class: we don't gain any memory. At the same time, we
> pay a CPU time overhead for inserting the object into a zspage
> and for decompressing it afterwards.
Sounds reasonable.
In my synthetic recompression test I saw only 5 objects that landed
in the same class after recompression; but this, as always, depends
on data patterns and compression algorithms being used.
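Just to illustrate the point: zsmalloc rounds every allocation up to
its size class, so a recompression that shrinks an object without
crossing a class boundary frees nothing. A rough sketch (the 16-byte
class granularity and the helper below are assumptions for the
example, not taken from the patch):

	/* hypothetical helper, for illustration only */
	static unsigned int example_class_size(unsigned int size)
	{
		return ALIGN(size, 16);	/* assumed class granularity */
	}

	/*
	 * example_class_size(2745) == example_class_size(2740) == 2752,
	 * so recompressing 2745 -> 2740 bytes saves no memory, yet we
	 * still pay for the extra compression and later decompression.
	 */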
[..]
> + class_size_prev = zs_get_class_size(zram->mem_pool, comp_len_prev);
> + class_size_next = zs_get_class_size(zram->mem_pool, comp_len_next);
> /*
> * Either a compression error or we failed to compress the object
> * in a way that will save us memory. Mark the object so that we
> @@ -1663,6 +1667,7 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
> */
> if (comp_len_next >= huge_class_size ||
> comp_len_next >= comp_len_prev ||
> + class_size_next == class_size_prev ||
Let's use >= here, as Andrew suggested; see the sketch below the hunk.
> ret) {
> zram_set_flag(zram, index, ZRAM_RECOMP_SKIP);
> zram_clear_flag(zram, index, ZRAM_IDLE);
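Something along these lines (untested, just to show the suggested
comparison):

	if (comp_len_next >= huge_class_size ||
	    comp_len_next >= comp_len_prev ||
	    class_size_next >= class_size_prev ||
	    ret) {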
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index 2a430e713ce5..75dcbafd5f36 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -56,4 +56,6 @@ unsigned long zs_get_total_pages(struct zs_pool *pool);
> unsigned long zs_compact(struct zs_pool *pool);
[..]
> +/**
> + * zs_get_class_size() - Return the size (in bytes) of the zsmalloc
> + * &size_class into which an object of the specified size will be
> + * (or has already been) inserted.
> + *
> + * @pool: zsmalloc pool to use
> + * @size: object size, in bytes
> + *
> + * Context: Any context.
> + *
> + * Return: the size (in bytes) of the zsmalloc &size_class into which
> + * the object with the specified size will be inserted.
> + */
Can't think of a better way of doing it. On one hand we probably don't
want to expose the object-size-to-class-size mapping outside of zsmalloc,
but on the other hand we sort of already do so: zs_huge_class_size().
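FWIW, zram already consumes that kind of information at init time via
the huge class helper, roughly like this (simplified from memory, may
not match the exact code):

	huge_class_size = zs_huge_class_size(zram->mem_pool);

so zs_get_class_size() mostly extends an existing precedent of letting
the caller see class geometry.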
> +unsigned int zs_get_class_size(struct zs_pool *pool, unsigned int size)
> +{
> + struct size_class *class = pool->size_class[get_size_class_index(size)];
> +
> + return class->size;
> +}
> +EXPORT_SYMBOL_GPL(zs_get_class_size);
I'll kindly ask for a v2. This conflicts with the configurable zspage
order patch set which I posted last night: get_size_class_index() now
takes the pool parameter.
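I.e. after a rebase the helper would presumably end up looking
something like this (untested):

	unsigned int zs_get_class_size(struct zs_pool *pool, unsigned int size)
	{
		struct size_class *class;

		class = pool->size_class[get_size_class_index(pool, size)];
		return class->size;
	}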