Message-ID: <CAKEwX=NyV_V2f9nGy7814m7ax5T_-J70HjQKdqCrf1vG-Tn_Vw@mail.gmail.com>
Date: Thu, 14 Dec 2023 12:29:21 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Chengming Zhou <zhouchengming@...edance.com>, Andrew Morton <akpm@...ux-foundation.org>,
Chris Li <chriscli@...gle.com>, Johannes Weiner <hannes@...xchg.org>,
Seth Jennings <sjenning@...hat.com>, Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/5] mm/zswap: change dstmem size to one page
On Thu, Dec 14, 2023 at 10:30 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Thu, Dec 14, 2023 at 5:57 AM Chengming Zhou
> <zhouchengming@...edance.com> wrote:
> >
> > On 2023/12/14 21:37, Yosry Ahmed wrote:
> > > On Thu, Dec 14, 2023 at 5:33 AM Chengming Zhou
> > > <zhouchengming@...edance.com> wrote:
> > >>
> > >> On 2023/12/14 08:18, Nhat Pham wrote:
> > >>> On Wed, Dec 13, 2023 at 3:34 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> > >>>>
> > >>>> On Tue, Dec 12, 2023 at 8:18 PM Chengming Zhou
> > >>>> <zhouchengming@...edance.com> wrote:
> > >>>>>
> > >>>>> Change the dstmem size from 2 * PAGE_SIZE to only one page, since
> > >>>>> we need at most one page when compressing, and the "dlen" is also
> > >>>>> PAGE_SIZE in acomp_request_set_params(). If the output size > PAGE_SIZE,
> > >>>>> we don't want to store the output in zswap anyway.
> > >>>>>
> > >>>>> So change it to one page, and delete the stale comment.
> > >>>>
> > >>>> I couldn't find the history of why we needed 2 * PAGE_SIZE; it would
> > >>>> be nice if someone has the context, perhaps one of the maintainers.
> > >>>
> > >>> It'd be very nice indeed.
> > >>>
> > >>>>
> > >>>> One potential reason is that we used to store a zswap header
> > >>>> containing the swap entry in the compressed page for writeback
> > >>>> purposes, but we don't do that anymore. Maybe we wanted to be able to
> > >>>> handle the case where an incompressible page would exceed PAGE_SIZE
> > >>>> because of that?
> > >>>
> > >>> It could be, hmm. I didn't study the old zswap architecture too much,
> > >>> but it has been 2 * PAGE_SIZE since the time zswap was first merged,
> > >>> last I checked.
> > >>> I'm not 100% comfortable ACK-ing the undoing of something that looks
> > >>> so intentional, but FTR, AFAICT, this looks correct to me.
> > >>
> > >> Right, there is no history of why we needed 2 pages. But from the
> > >> current code, only one page is needed, and no problem was found in the
> > >> kernel build stress testing.
> > >
> > > Could you try manually stressing the compression with data that
> > > doesn't compress at all (i.e. dlen == PAGE_SIZE)? I want to make sure
> > > that this case is specifically handled. I think using data from
> > > /dev/random will do that but please double check that dlen ==
> > > PAGE_SIZE.
FWIW, zsmalloc supports storing pages that are PAGE_SIZE in length, so a
use case is probably there (although it could be for ZRAM). We tested it
during the storing-uncompressed-pages patch. Architecturally, it seems
that zswap just lets the backend allocator reject compressed objects
that are too large, and relies on the compressor to reject pages that
compress too poorly.
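To make that concrete, here is a rough sketch of how I read the compress
path (paraphrased from memory, not the exact zswap_store() code; names
and the error label are approximate):

	/* dstmem only ever needs to hold PAGE_SIZE bytes after this patch */
	dst = acomp_ctx->dstmem;
	dlen = PAGE_SIZE;

	sg_init_one(&input, src, PAGE_SIZE);
	sg_init_one(&output, dst, PAGE_SIZE);	/* was 2 * PAGE_SIZE */
	acomp_request_set_params(acomp_ctx->req, &input, &output,
				 PAGE_SIZE, dlen);

	/* the compressor errors out rather than writing past dlen */
	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req),
			      &acomp_ctx->wait);
	dlen = acomp_ctx->req->dlen;
	if (ret)
		goto reject;

	/* zsmalloc accepts objects up to PAGE_SIZE, so dlen == PAGE_SIZE
	 * (seen a couple of times in the bpftrace run below) still works */
	ret = zpool_malloc(zpool, dlen, gfp, &handle);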
> >
> > I just did the same kernel build testing, indeed there are a few cases
> > that output dlen == PAGE_SIZE.
> >
> > bpftrace -e 'k:zpool_malloc {@[(uint32)arg1==4096]=count()}'
> >
> > @[1]: 2
> > @[0]: 12011430
>
> That's very useful information, thanks for testing that. Please
> include this in the commit log. Please also include the fact that we
> used to store a zswap header with the compressed page but don't do
> that anymore, which *may* be the reason why this was needed back then.
>
> I still want someone who knows the history to Ack this, but FWIW it
> looks correct to me, so low-key:
> Reviewed-by: Yosry Ahmed <yosryahmed@...gle.com>
Anyway:
Reviewed-by: Nhat Pham <nphamcs@...il.com>