Message-ID: <Y3H0ZWQKPsbPrB85@google.com>
Date: Mon, 14 Nov 2022 16:55:17 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Nitin Gupta <ngupta@...are.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCHv4 0/9] zsmalloc/zram: configurable zspage size
On (22/11/11 09:03), Minchan Kim wrote:
[..]
> My only concern with a bigger pages_per_zspage (e.g., 8 or 16) is exhausting
> memory when zram is used for swap. The use case aims to help memory pressure,
> but in the worst case, the bigger pages_per_zspage is, the more chance there
> is of running out of memory.
It's hard to speak in concrete terms here. What locally may look like
a less optimal configuration can turn out to be a more optimal configuration
globally.
Yes, some zspage_chains get longer, but in return we get very different
size-class clustering and zspool performance/configuration.
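To make the clustering point concrete, here is a minimal userspace sketch
(not the patch itself) of how zsmalloc picks a class's chain length: it
tries every length up to the cap and keeps the one with the highest
used-bytes percentage. It is modeled on get_pages_per_zspage() in
mm/zsmalloc.c; the cap parameter and the demo class sizes are illustrative.

#include <stdio.h>

#define PAGE_SIZE 4096

static int pages_per_zspage(int class_size, int max_chain)
{
	int i, best = 1, best_usedpc = 0;

	for (i = 1; i <= max_chain; i++) {
		int zspage_size = i * PAGE_SIZE;
		int waste = zspage_size % class_size;
		int usedpc = (zspage_size - waste) * 100 / zspage_size;

		if (usedpc > best_usedpc) {
			best_usedpc = usedpc;
			best = i;
		}
	}
	return best;
}

int main(void)
{
	/*
	 * E.g. a size-3328 class: with a cap of 4 it stays on 1-page
	 * zspages (~81% used), with a cap of 8 it can pick a 5-page
	 * chain (~97% used). Class sizes here are just examples.
	 */
	int sizes[] = { 3328, 2720, 1024 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("class %4d: chain %d (cap 4) vs %d (cap 8)\n",
		       sizes[i], pages_per_zspage(sizes[i], 4),
		       pages_per_zspage(sizes[i], 8));
	return 0;
}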
For example, a synthetic test on my host:
zspage_chain_size 4
-------------------
zsmalloc classes
 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
   ...
 Total                13           51        413836     412973     159955                         3
zram mm_stat
1691783168 628083717 655175680 0 655175680 60 0 34048 34049
zspage_chain_size 8
-------------------
zsmalloc classes
 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
   ...
 Total                18           87        414852     412978     156666                         0
zram mm_stat
1691803648 627793930 641703936 0 641703936 60 0 33591 33591
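For reference, a small helper (illustrative, not part of this series) that
labels the mm_stat columns quoted above; the field order follows
Documentation/admin-guide/blockdev/zram.rst, and /sys/block/zram0 is just
an assumed device path.

#include <stdio.h>

int main(void)
{
	static const char *name[] = {
		"orig_data_size", "compr_data_size", "mem_used_total",
		"mem_limit", "mem_used_max", "same_pages",
		"pages_compacted", "huge_pages", "huge_pages_since",
	};
	/* assumed device path; adjust for your zram device */
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");
	unsigned long long v;
	int i;

	if (!f) {
		perror("fopen");
		return 1;
	}
	for (i = 0; i < 9 && fscanf(f, "%llu", &v) == 1; i++)
		printf("%-17s %llu\n", name[i], v);
	fclose(f);
	return 0;
}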
Note that we have a lower "pages_used" value for the same amount of stored
data: down to 156666 from 159955 pages. That is 3289 fewer pages, i.e.
3289 * 4096 = 13471744 bytes (~12.8 MiB, about 2%), which matches the
mem_used_total difference in mm_stat: 655175680 - 641703936 = 13471744.
So it *could be* that longer zspage_chains are beneficial even in
memory-sensitive cases, but we need more data on this, so that we can
speak "statistically".