Message-ID: <20240904132508.2000743-1-senozhatsky@chromium.org>
Date: Wed,  4 Sep 2024 22:24:52 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Richard Chang <richardycc@...gle.com>,
	linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [RFC PATCH 0/3] zram: post-processing target selection strategies

Hello,

	Very early RFC, literally started working on it several hours ago.

Problem:
--------
Both recompression and writeback perform a very simple linear scan
of all zram slots in search of post-processing (writeback or
recompression) candidate slots.  This often means that we pick the
worst candidate for pp (post-processing), e.g. a 48-byte object for
writeback, which is nearly useless, because it only releases 48 bytes
on the zsmalloc side.

Solution:
---------
This series reworks the way we select pp targets.  We, quite clearly,
want to sort all the candidates and always pick the largest one, be
it for recompression or writeback.  Especially for writeback: the
larger the object we write back, the more memory we release.  This
series introduces the concept of pp groups and a pp scan/select
procedure.

The scan step is a simple iteration over all zram->table entries,
just like what we currently do, but we don't post-process a candidate
slot immediately.  Instead we assign it to a pp group, of which we
have 16 (in this patch).  A pp group is, basically, a list that holds
pp candidate slots belonging to the same size class.  The 16 pp
groups are 256 bytes apart from each other on a 4K-page system.

E.g.

	pp group 16: holds candidates of sizes 3840-4096 bytes
	pp group 15: holds candidates of sizes 3584-3840 bytes
	and so on
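The size-to-group mapping above can be sketched as follows.  This is
a hypothetical helper, not code from the series; the names
(pp_group_index, PP_GROUP_WIDTH) and the 1-based indexing are
assumptions made to match the example groups listed above:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE      4096
#define NUM_PP_GROUPS  16
#define PP_GROUP_WIDTH (PAGE_SIZE / NUM_PP_GROUPS)  /* 256 bytes on 4K */

/*
 * Hypothetical helper: map an object's compressed size to a pp group
 * index (1..16).  Group 16 holds the largest candidates, group 1 the
 * smallest.
 */
static int pp_group_index(size_t obj_size)
{
	/* ceil(obj_size / PP_GROUP_WIDTH), clamped to [1, 16] */
	size_t idx = (obj_size + PP_GROUP_WIDTH - 1) / PP_GROUP_WIDTH;

	if (idx < 1)
		idx = 1;
	if (idx > NUM_PP_GROUPS)
		idx = NUM_PP_GROUPS;
	return (int)idx;
}
```

E.g. a 3700-byte object lands in group 15 and a full 4096-byte object
in group 16, matching the ranges above; a tiny 48-byte object falls
into group 1 and is considered last.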

The select step simply iterates over the pp groups from highest to
lowest and picks all candidate slots a particular group contains.
This gives us sorted candidates (in linear time) and lets us select
the most suitable (largest) candidates for post-processing first.
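A minimal sketch of the select step, assuming each group is kept as a
simple singly linked list of candidate slots (the struct names and the
pp_select helper are illustrative, not the series' actual API):

```c
#include <assert.h>
#include <stddef.h>

#define NUM_PP_GROUPS 16

/* Hypothetical candidate slot; real code links zram table entries. */
struct pp_slot {
	size_t size;
	struct pp_slot *next;
};

/* Hypothetical pp state: one list per group, indices 1..16 used. */
struct pp_ctl {
	struct pp_slot *groups[NUM_PP_GROUPS + 1];
};

/*
 * Select step: walk groups from largest size class to smallest and
 * detach the first candidate found, so callers always post-process
 * the biggest remaining objects first.
 */
static struct pp_slot *pp_select(struct pp_ctl *ctl)
{
	for (int g = NUM_PP_GROUPS; g >= 1; g--) {
		if (ctl->groups[g]) {
			struct pp_slot *slot = ctl->groups[g];

			ctl->groups[g] = slot->next;
			return slot;
		}
	}
	return NULL;	/* no candidates left */
}
```

Because slots were bucketed by size class during the scan, this walk
returns candidates in (coarsely) descending size order without any
explicit sort.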

NOTE:
The series is at a very early stage; I basically just compile-tested
it and ran some initial tests.  It needs more work, but I think this
is the right direction and it all looks quite promising.

Sergey Senozhatsky (3):
  zram: introduce ZRAM_PP_SLOT flag
  zram: rework recompress target selection logic
  zram: rework writeback target selection logic

 drivers/block/zram/zram_drv.c | 287 ++++++++++++++++++++++++++++------
 drivers/block/zram/zram_drv.h |   1 +
 2 files changed, 243 insertions(+), 45 deletions(-)

-- 
2.46.0.469.g59c65b2a67-goog

