Message-ID: <CAGsJ_4yUAw_tzX7z8iizToMB8SDJPNOhFRZNXva_ae46q5vRwg@mail.gmail.com>
Date: Sat, 29 Nov 2025 17:55:34 +0800
From: Barry Song <21cnbao@...il.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Richard Chang <richardycc@...gle.com>,
Brian Geffon <bgeffon@...gle.com>, Minchan Kim <minchan@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-block@...r.kernel.org,
Minchan Kim <minchan@...gle.com>
Subject: Re: [PATCH 1/2] zram: introduce compressed data writeback
On Sat, Nov 29, 2025 at 1:06 AM Sergey Senozhatsky
<senozhatsky@...omium.org> wrote:
>
> From: Richard Chang <richardycc@...gle.com>
>
Hi Richard, Sergey,
Thanks a lot for developing this. For years, people have been looking for
compressed data writeback to reduce I/O, for example by compacting multiple
compressed blocks into a single page on the block device. I guess this
patchset hasn't reached that point yet, right?
> zram stores all written back slots raw, which implies that
> during writeback zram first has to decompress slots (except
> for ZRAM_HUGE slots, which are raw already). The problem
> with this approach is that not every written back page gets
> read back (either via read() or via page-fault), which means
> that zram basically wastes CPU cycles and battery decompressing
> such slots. This changes with introduction of decompression
If a page is swapped out and never read again, does that actually indicate
a memory leak in userspace?
So the main benefit of this patch so far is actually avoiding decompression
for "leaked" anon pages, which userspace may still hold a reference to but
never accesses again?
> on demand, in other words decompression on read()/page-fault.
Thanks
Barry