Message-ID: <jbf64ctbcquh3jvcoioszpiw4ucdxs3olr45fwtfgobifwxw27@mcxxyyji4ltb>
Date: Tue, 8 Apr 2025 12:33:37 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Nhat Pham <nphamcs@...il.com>, Yosry Ahmed <yosryahmed@...gle.com>, akpm@...ux-foundation.org,
hannes@...xchg.org, cerasuolodomenico@...il.com, sjenning@...hat.com,
ddstreet@...e.org, vitaly.wool@...sulko.com, hughd@...gle.com, corbet@....net,
konrad.wilk@...cle.com, rppt@...nel.org, linux-mm@...ck.org, kernel-team@...a.com,
linux-kernel@...r.kernel.org, david@...t.cz, Minchan Kim <minchan@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>, Chengming Zhou <chengming.zhou@...ux.dev>,
Kairui Song <ryncsn@...il.com>
Subject: Re: [PATCH 0/2] minimize swapping on zswap store failure
Hi,
Sorry for the delay.
On (25/04/04 07:06), Joshua Hahn wrote:
> On Fri, 4 Apr 2025 10:46:22 +0900 Sergey Senozhatsky <senozhatsky@...omium.org> wrote:
>
> > On (25/04/03 13:38), Nhat Pham wrote:
> > > > Ultimately the goal is to prevent an incompressible page from hoarding the
> > > > compression algorithm on multiple reclaim attempts, but if we are spending
> > > > more time by allocating new pages... maybe this isn't the correct approach :(
> > >
> > > Hmmm, IIUC this problem also exists with zram, since zram allocates a
> > > PAGE_SIZE sized buffer to hold the original page's content. I will
> > > note though that zram seems to favor these kinds of pages for
> > > writeback :) Maybe this is why...?
> >
> > zram is a generic block device, it must store whatever comes in,
> > compressible or incompressible. E.g. when we have, say, ext4
> > running atop of the zram device we cannot reject page stores.
> >
> > And you are right, when we use zram for swap, there is some benefit
> > in storing incompressible pages. First, those pages are candidates
> > for zram writeback, which achieves the goal of removing the page
> > from RAM; after all, with the "return it back to LRU" approach we
> > give up on reclaiming the incompressible page. Second, on some zram setups we do
> > re-compression (with a slower and more efficient algorithm) and in
> > a certain number of cases, what is incompressible with the primary (fast)
> > algorithm is compressible with the secondary algorithm.
>
> Hello Sergey,
>
> Thank you for your insight, I did not know this is how zram handled
> incompressible pages.
Well, yes, zram doesn't have the freedom to reject writes; to the
fs/vfs that would look like a block device error.
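To make the store-path decision concrete, here is a minimal toy sketch (in
plain userspace C, not the actual zram code): since a block device cannot
fail a write, a page whose compressed length does not fit is stored as-is
and flagged as "huge"/incompressible. The `toy_compress_len()` helper and
its thresholds are made up purely for illustration.

```c
/* Toy sketch of a zram-like store path; NOT the kernel implementation.
 * A block device must accept every write, so an incompressible page is
 * stored uncompressed and flagged, rather than rejected. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

enum store_kind { STORE_COMPRESSED, STORE_HUGE };

/* Hypothetical stand-in for a real compressor: returns the "compressed"
 * length, or PAGE_SIZE when the data does not compress. Here we pretend
 * only pages filled with a single repeated byte compress well. */
static size_t toy_compress_len(const unsigned char *page)
{
	for (size_t i = 1; i < PAGE_SIZE; i++)
		if (page[i] != page[0])
			return PAGE_SIZE;
	return 64;
}

/* zram-like store: never reject; fall back to an uncompressed copy
 * flagged as "huge" (a candidate for writeback/recompression later). */
static enum store_kind zram_store(const unsigned char *page)
{
	size_t clen = toy_compress_len(page);

	if (clen >= PAGE_SIZE)
		return STORE_HUGE;	/* kept as-is, flagged incompressible */
	return STORE_COMPRESSED;
}
```

The key point the sketch shows: the failure mode is a flag on the stored
object, never an error returned to the fs/vfs layer.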
[..]
> On the note of trying a second compression algorithm -- do you know how much
> of the initially incompressible pages get compressed later?
So I don't recall the exact numbers, but, if I'm not mistaken, in
my tests (on chromeos) I saw something like a 20+% success rate
(a little higher than just 20%) for successful re-compression with
a secondary algorithm; but, like you said, this is very data-pattern
specific.
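The two-pass idea above can be sketched as follows (a toy model, not the
kernel code): pages the fast primary algorithm gave up on get one more
attempt with a slower, stronger secondary algorithm. Both "compressors"
here are invented for illustration; in the toy model the secondary one
simply accepts a wider class of pages than the primary.

```c
/* Toy sketch of zram-style recompression; NOT the kernel implementation. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical primary (fast) compressor: only handles uniform pages. */
static bool primary_compresses(const unsigned char *page)
{
	for (size_t i = 1; i < PAGE_SIZE; i++)
		if (page[i] != page[0])
			return false;
	return true;
}

/* Hypothetical secondary (slow, stronger) compressor: also handles
 * pages drawn from a small byte alphabet. */
static bool secondary_compresses(const unsigned char *page)
{
	bool seen[256] = { false };
	int distinct = 0;

	for (size_t i = 0; i < PAGE_SIZE; i++)
		if (!seen[page[i]]) {
			seen[page[i]] = true;
			distinct++;
		}
	return distinct <= 16;
}

/* Recompression pass over a page: true if either algorithm can store
 * it compressed, false if it stays incompressible ("huge"). */
static bool recompress(const unsigned char *page)
{
	if (primary_compresses(page))
		return true;		/* was never flagged as huge */
	return secondary_compresses(page);
}
```

In this toy model a page of alternating bytes fails the primary pass but
succeeds on the secondary one, mirroring the "incompressible with the
fast algorithm, compressible with the slow one" cases described above.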
> Thank you again for your response! Have a great day :-)
Thanks, you too!