Message-ID: <20240911080130.3766632-1-senozhatsky@chromium.org>
Date: Wed, 11 Sep 2024 17:01:08 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan@...nel.org>
Cc: linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [PATCHv3 0/6] zram: optimal post-processing target selection

Problem:
--------
Both recompression and writeback perform a very simple linear scan
of all zram slots in search of post-processing (writeback or
recompression) candidate slots.  This often means that we pick the
worst candidate for pp (post-processing), e.g. a 48-byte object for
writeback, which is nearly useless, because it only releases 48
bytes from the zsmalloc pool but consumes an entire 4K slot in the
backing device.  Similarly, recompression of a 48-byte object is
unlikely to save more memory than recompression of a 3000-byte
object.  Both recompression and writeback consume constrained
resources (CPU time, battery, backing device storage space) and
quite often have a (daily) limit on the number of items they
post-process, so we should utilize those constrained resources in
the most optimal way.

Solution:
---------
This series reworks the way we select pp targets.  We, quite clearly,
want to sort all the candidates and always pick the largest, be it
for recompression or writeback.  This matters especially for
writeback, because the larger the object we write back, the more
memory we release.  The series introduces the concept of PP buckets
and PP scan/selection.

The scan step is a simple iteration over all zram->table entries,
just like what we currently do, but we don't post-process a candidate
slot immediately.  Instead, we assign it to a PP (post-processing)
bucket.  A PP bucket is, basically, a list which holds PP candidate
slots that belong to the same size class.  PP buckets are 64 bytes
apart; slots are not strictly sorted within a bucket, so there is a
64-byte variance.
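
To make the idea concrete, here is a minimal sketch of the bucket
structure and the scan-side placement (names such as pp_ctl, pp_slot
and place_pp_slot() are illustrative here, not necessarily what the
series ends up using):

#include <linux/list.h>
#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/types.h>

#define PP_BUCKET_SIZE_RANGE	64
/* +1 so that an object of exactly PAGE_SIZE still maps to a bucket */
#define NUM_PP_BUCKETS		((PAGE_SIZE / PP_BUCKET_SIZE_RANGE) + 1)

struct pp_ctl {
	struct list_head pp_buckets[NUM_PP_BUCKETS];
};

struct pp_slot {
	u32			index;	/* slot index in zram->table */
	struct list_head	entry;
};

/* Scan step: file a candidate slot into the bucket of its size class */
static void place_pp_slot(struct pp_ctl *ctl, struct pp_slot *pps,
			  size_t obj_size)
{
	u32 bucket = obj_size / PP_BUCKET_SIZE_RANGE;

	list_add(&pps->entry, &ctl->pp_buckets[bucket]);
}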

The select step simply iterates over pp buckets from highest to lowest
and picks all candidate slots a particular bucket contains.  So this
gives us sorted candidates (in linear time) and allows us to select
the most optimal (largest) candidates for post-processing first.
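
Continuing the sketch above, selection boils down to walking the
buckets top-down (select_pp_slot() is named in the v2..v3 notes
below; its body here is only an illustration):

/*
 * Select step: take a candidate from the highest non-empty bucket.
 * Per the v3 note below, the slot is not list_del()-ed here; the
 * caller unlinks and releases it once post-processing is done, which
 * simplifies error handling.
 */
static struct pp_slot *select_pp_slot(struct pp_ctl *ctl)
{
	int bucket;

	for (bucket = NUM_PP_BUCKETS - 1; bucket >= 0; bucket--) {
		if (!list_empty(&ctl->pp_buckets[bucket]))
			return list_first_entry(&ctl->pp_buckets[bucket],
						struct pp_slot, entry);
	}

	return NULL;
}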

v2..v3:
-- select_pp_slot() doesn't list_del() from its bucket now: this
   simplifies error handling
-- permit only one post-processing operation at a time: this takes
   care of race conditions between recompression and writeback (see
   the sketch after this list)
-- do not mark ZRAM_IDLE slots that cannot be ZRAM_IDLE
-- simplify some checks: for example, we don't need to check for
   ZRAM_UNDER_WB now when selecting slots for post-processing,
   because no slot selection will run concurrently with
   post-processing
-- reshuffle code of zram_free_page()
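
As a side note on the second bullet above: serializing recompression
and writeback can be as simple as a flag taken under zram->init_lock.
The helper below illustrates the idea only (pp_in_progress is an
assumed field, not necessarily how the series implements it):

/*
 * Sketch: allow at most one post-processing (recompression or
 * writeback) operation at a time.  Assumes a bool pp_in_progress
 * field in struct zram, protected by init_lock.
 */
static bool zram_try_begin_pp(struct zram *zram)
{
	bool started = false;

	down_write(&zram->init_lock);
	if (!zram->pp_in_progress) {
		zram->pp_in_progress = true;
		started = true;
	}
	up_write(&zram->init_lock);

	return started;
}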

Sergey Senozhatsky (6):
  zram: introduce ZRAM_PP_SLOT flag
  zram: permit only one post-processing operation at a time
  zram: rework recompress target selection strategy
  zram: rework writeback target selection strategy
  zram: do not mark idle slots that cannot be idle
  zram: reshuffle zram_free_page() flags operations

 Documentation/admin-guide/blockdev/zram.rst |   2 +
 drivers/block/zram/zram_drv.c               | 327 ++++++++++++++++----
 drivers/block/zram/zram_drv.h               |   2 +
 3 files changed, 269 insertions(+), 62 deletions(-)

--
2.46.0.598.g6f2099f65c-goog

