Message-ID: <Y2PzseskzPelrZum@google.com>
Date:   Thu, 3 Nov 2022 10:00:33 -0700
From:   Minchan Kim <minchan@...nel.org>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Nitin Gupta <ngupta@...are.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCHv4 4/9] zram: Introduce recompress sysfs knob

On Thu, Nov 03, 2022 at 12:25:43PM +0900, Sergey Senozhatsky wrote:
> On (22/11/02 14:06), Minchan Kim wrote:
> > On Tue, Oct 18, 2022 at 01:55:28PM +0900, Sergey Senozhatsky wrote:
> > > Allow zram to recompress (using secondary compression streams)
> > > pages. We support three modes:
> > > 
> > > 1) IDLE pages recompression is activated by `idle` mode
> > > 
> > > 	echo idle > /sys/block/zram0/recompress
> > > 
> > > 2) Since there may be many idle pages, user-space may pass a size
> > > watermark value (in bytes), and we will recompress only IDLE pages
> > > of equal or greater size:
> > > 
> > > 	echo 888 > /sys/block/zram0/recompress
> > 
> > Hmm, how about having a separate knob for the threshold?
> 
> Per my understanding, this threshold can change quite often,
> depending on memory pressure and so on. So we may force
> user-space to issue more syscalls, without any gain in
> simplicity.

Sorry, I didn't understand your point. Let me clarify my idea.
If we had a separate knob for the recompress threshold, we could
work like this:

# recompress any compressed pages which are greater than 888 bytes.
echo 888 > /sys/block/zram0/recompress_threshold

# try to recompress any pages greater than the threshold with the
# following algorithms.

echo "type=lzo priority=1" > /sys/block/zram0/recompress_algo
echo "type=zstd priority=2" > /sys/block/zram0/recompress_algo
echo "type=deflate priority=3" > /sys/block/zram0/recompress_algo
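
Just to illustrate the idea, a rough sketch of a store handler for the
threshold knob (the zram->recompress_threshold field is made up here,
it is not in your patch):

static ssize_t recompress_threshold_store(struct device *dev,
				struct device_attribute *attr,
				const char *buf, size_t len)
{
	struct zram *zram = dev_to_zram(dev);
	unsigned long threshold;

	if (kstrtoul(buf, 10, &threshold))
		return -EINVAL;

	/* objects never exceed PAGE_SIZE in zram, so reject bogus values */
	if (!threshold || threshold >= PAGE_SIZE)
		return -EINVAL;

	down_write(&zram->init_lock);
	zram->recompress_threshold = threshold;
	up_write(&zram->init_lock);

	return len;
}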

> 
> > recompress_threshold
> > 
> > With that, we could recompress by the 888-byte threshold combined
> > with idle/huge, as well as by the threshold alone.
> > 
> >   echo 888 > /sys/block/zram0/recompress_threshold
> >   echo 1 > /sys/block/zram0/recompress
> > 
> >   or
> > 
> >   echo 888 > /sys/block/zram0/recompress_threshold
> >   echo idle > /sys/block/zram0/recompress
> > 
> > or we can introduce the threshold together with the action item.
> >   
> >   echo "idle 888" > /sys/block/zram0/recompress
> >   echo "huge 888" > /sys/block/zram0/recompress
> >   echo "normal 888" > /sys/block/zram0/recompress
> 
> I like the latter one, where threshold is an optional argument.
> I probably would even go a bit further and add keywords:
> 
> 	type=STRING threshold=INT

Yeah, keywords would be better. Let's discuss whether we need an
optional threshold argument for each algorithm, or just one threshold
for every secondary algorithm, or both.
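
To sketch the parsing side (the helper name and the algo/threshold
outputs are made-up placeholders, not code from the patch):

/* parse "type=zstd threshold=888" style input; buf is a writable copy */
static int parse_recompress_args(char *buf, char *algo, size_t algo_sz,
				 unsigned int *threshold)
{
	char *args = strim(buf);
	char *param, *val;

	while ((param = strsep(&args, " ")) != NULL) {
		if (!*param)
			continue;

		val = strchr(param, '=');
		if (!val)
			return -EINVAL;
		*val++ = '\0';

		if (!strcmp(param, "type")) {
			strscpy(algo, val, algo_sz);
		} else if (!strcmp(param, "threshold")) {
			if (kstrtouint(val, 10, threshold))
				return -EINVAL;
		} else {
			return -EINVAL;
		}
	}

	return 0;
}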

> 
> > > 3) HUGE pages recompression is activated by `huge` mode
> > > 
> > > 	echo huge > /sys/block/zram0/recompress
> > > 
> > > 4) HUGE_IDLE pages recompression is activated by `huge_idle` mode
> > > 
> > > 	echo huge_idle > /sys/block/zram0/recompress
> > > 
> > > Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
> > > ---
> > >  drivers/block/zram/Kconfig    |  15 +++
> > >  drivers/block/zram/zram_drv.c | 196 +++++++++++++++++++++++++++++++++-
> > >  drivers/block/zram/zram_drv.h |   2 +
> > >  3 files changed, 210 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
> > > index d4100b0c083e..3e00656a6f8a 100644
> > > --- a/drivers/block/zram/Kconfig
> > > +++ b/drivers/block/zram/Kconfig
> > > @@ -78,3 +78,18 @@ config ZRAM_MEMORY_TRACKING
> > >  	  /sys/kernel/debug/zram/zramX/block_state.
> > >  
> > >  	  See Documentation/admin-guide/blockdev/zram.rst for more information.
> > > +
> > > +config ZRAM_MULTI_COMP
> > > +	bool "Enable multiple per-CPU compression streams"
> > 
> > per-CPU is an implementation detail. Let's not mention it.
> 
> OK.
> 
> > > +	depends on ZRAM
> > > +	help
> > > +	This will enable per-CPU multi-compression streams, so that ZRAM
> > 
> >       indentation
> 
> OK. A question: does this matter? I don't see any problems in menuconfig.
> 
> > > +	can re-compress IDLE/huge pages, using a potentially slower but
> > > +	more effective compression algorithm. Note, that IDLE page support
> > > +	requires ZRAM_MEMORY_TRACKING.
> > > +
> > > +          echo TIMEOUT > /sys/block/zramX/idle
> > > +          echo SIZE > /sys/block/zramX/recompress
> > > +
> > > +	SIZE (in bytes) parameter sets the object size watermark: idle
> > > +	objects that are of a smaller size will not get recompressed.
> > > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > > index 94c62d7ea818..da11560ecf70 100644
> > > --- a/drivers/block/zram/zram_drv.c
> > > +++ b/drivers/block/zram/zram_drv.c
> > > @@ -1282,6 +1282,12 @@ static void zram_free_page(struct zram *zram, size_t index)
> > >  		atomic64_dec(&zram->stats.huge_pages);
> > >  	}
> > >  
> > > +	if (zram_test_flag(zram, index, ZRAM_RECOMP))
> > > +		zram_clear_flag(zram, index, ZRAM_RECOMP);
> > > +
> > > +	if (zram_test_flag(zram, index, ZRAM_RECOMP_SKIP))
> > > +		zram_clear_flag(zram, index, ZRAM_RECOMP_SKIP);
> > 
> > Let's squeeze the comp algo index into the meta area since we have
> > some room for the bits. Then could we remove the two specific
> > recomp flags?
> 
> What is meta area?

zram->table[index].flags

If we squeeze the algorithm index in there, we could work like this
without ZRAM_RECOMP_SKIP:

read_block_state
    zram_algo_idx(zram, index) > 0 ? 'r' : '.');

zram_read_from_zpool
    if (zram_algo_idx(zram, idx) != 0)
        idx = 1;

zram_recompress
    ..
    we don't need to set ZRAM_RECOMP since every meta will have the algo
    idx.

zram_free_page
    zram_clear_algo(zram, index);

recompress_store
    int algo_idx = zram_algo_idx(zram, index);

    if (algo_idx == max_algo_idx)
        goto next
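
And rough helpers for the table[index].flags part (the bit positions
below are only an example; free bits above ZRAM_FLAG_SHIFT would have
to be picked):

/* sketch: reserve two flag bits for the algorithm index (0..3) */
#define ZRAM_ALGO_IDX_SHIFT	(ZRAM_FLAG_SHIFT + 6)
#define ZRAM_ALGO_IDX_MASK	(0x3UL << ZRAM_ALGO_IDX_SHIFT)

static unsigned int zram_algo_idx(struct zram *zram, u32 index)
{
	return (zram->table[index].flags & ZRAM_ALGO_IDX_MASK) >>
			ZRAM_ALGO_IDX_SHIFT;
}

static void zram_set_algo(struct zram *zram, u32 index, unsigned int algo_idx)
{
	zram->table[index].flags &= ~ZRAM_ALGO_IDX_MASK;
	zram->table[index].flags |= (unsigned long)algo_idx <<
						ZRAM_ALGO_IDX_SHIFT;
}

static void zram_clear_algo(struct zram *zram, u32 index)
{
	zram_set_algo(zram, index, 0);
}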

> 
> > I am thinking of having the feature work on incoming pages on the
> > fly, not only via the manual recompress knob, so it would be good
> > if we could make the interface abstract enough to work something
> > like this (I may be missing a reason why we couldn't; I need more
> > time to look into it):
> >
> > zram_bvec_write:
> > 
> >     for (i = 0; i < MAX_COMP_ALGO; i++) {
> >         zstrm = zcomp_stream_get(i);
> >         zcomp_compress(src, &comp_len);
> >         if (comp_len > threshold) {
> >             zcomp_stream_put(i);
> >             continue;
> >         }
> >         break;
> >     }
> > 
> > zram_bvec_read:
> >     algo_idx = zram_get_algo_idx(zram, index);
> >     zstrm = zcomp_stream_get(zram, algo_idx);
> >     zcomp_decompress(zstrm);
> >     zcomp_stream_put(zram, algo_idx);
> 
> Hmm. This is something that should not be enabled by default.

Exactly. I don't mean to enable it by default, either.

> N compressions for every stored page is very CPU- and power-intensive.
> We definitely want a way to have recompression
> as a user-space event, which gives all sorts of flexibility and
> extensibility. ZRAM doesn't (and should not) know about too many
> things, so ZRAM can't make good decisions (and probably should not
> try). User-space can make good decisions on the other hand.
> 
> So recompression for us is not something that happens all the time,
> unconditionally. It's something that happens sometimes, depending on
> the situation on the host.

Totally agree. I am not saying we should enable the feature by default,
but we should at least consider it for the future. I have something in
mind that could be useful later.
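
Roughly, the write side could later look something like this (only a
sketch; ZRAM_MAX_COMPS, the zram->comps[] array and
zram->recomp_threshold are assumptions, not code in this series):

	/* try algorithms in priority order until one compresses well enough */
	for (prio = 0; prio < ZRAM_MAX_COMPS; prio++) {
		if (!zram->comps[prio])
			continue;

		zstrm = zcomp_stream_get(zram->comps[prio]);
		ret = zcomp_compress(zstrm, src, &comp_len);
		if (ret || comp_len > zram->recomp_threshold) {
			zcomp_stream_put(zram->comps[prio]);
			continue;
		}
		/* keep this stream and remember prio in table[index].flags */
		break;
	}
	/* if nothing met the threshold, store the primary result (not shown) */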

> 
> [..]
> > > +static int zram_recompress(struct zram *zram, u32 index, struct page *page,
> > > +			   int size_watermark)
> > > +{
> > > +	unsigned long handle_prev;
> > > +	unsigned long handle_next;
> > > +	unsigned int comp_len_next;
> > > +	unsigned int comp_len_prev;
> > 
> > How about orig_handle and new_handle with orig_comp_len and new_comp_len?
> 
> No opinion. Can we have prev and next? :)

prev and next give the impression of a position, something like a list.
orig and new give the impression of stale and fresh.

We are doing the latter here.
