Message-ID: <SJ0PR11MB56781233ABFE772C5991AB01C9362@SJ0PR11MB5678.namprd11.prod.outlook.com>
Date: Tue, 3 Dec 2024 01:01:05 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Yosry Ahmed <yosryahmed@...gle.com>, Chengming Zhou
	<chengming.zhou@...ux.dev>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
	<hannes@...xchg.org>, "nphamcs@...il.com" <nphamcs@...il.com>,
	"usamaarif642@...il.com" <usamaarif642@...il.com>, "ryan.roberts@....com"
	<ryan.roberts@....com>, "21cnbao@...il.com" <21cnbao@...il.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "Feghali, Wajdi K"
	<wajdi.k.feghali@...el.com>, "Gopal, Vinodh" <vinodh.gopal@...el.com>,
	"Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications for
 batching.

Hi Chengming, Yosry,

> -----Original Message-----
> From: Yosry Ahmed <yosryahmed@...gle.com>
> Sent: Monday, December 2, 2024 11:33 AM
> To: Chengming Zhou <chengming.zhou@...ux.dev>
> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>; linux-
> kernel@...r.kernel.org; linux-mm@...ck.org; hannes@...xchg.org;
> nphamcs@...il.com; usamaarif642@...il.com; ryan.roberts@....com;
> 21cnbao@...il.com; akpm@...ux-foundation.org; Feghali, Wajdi K
> <wajdi.k.feghali@...el.com>; Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications
> for batching.
> 
> On Wed, Nov 27, 2024 at 11:00 PM Chengming Zhou
> <chengming.zhou@...ux.dev> wrote:
> >
> > On 2024/11/28 06:53, Kanchana P Sridhar wrote:
> > > In order to set up zswap_store_pages() to enable a clean batching
> > > implementation in [1], this patch implements the following changes:
> > >
> > > 1) Addition of zswap_alloc_entries() which will allocate zswap entries for
> > >     all pages in the specified range for the folio, upfront. If this fails,
> > >     we return an error status to zswap_store().
> > >
> > > 2) Addition of zswap_compress_pages() that calls zswap_compress() for
> > >     each page, and returns false if any zswap_compress() fails, so
> > >     zswap_store_page() can clean up resources allocated and return an
> > >     error status to zswap_store().
> > >
> > > 3) A "store_pages_failed" label that is a catch-all for all failure points
> > >     in zswap_store_pages(). This facilitates cleaner error handling within
> > >     zswap_store_pages(), which will become important for IAA compress
> > >     batching in [1].
> > >
> > > [1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935
> > >
> > > Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
> > > ---
> > >   mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
> > >   1 file changed, 71 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index b09d1023e775..db80c66e2205 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
> > >   * main API
> > >   **********************************/
> > >
> > > +static bool zswap_compress_pages(struct page *pages[],
> > > +                              struct zswap_entry *entries[],
> > > +                              u8 nr_pages,
> > > +                              struct zswap_pool *pool)
> > > +{
> > > +     u8 i;
> > > +
> > > +     for (i = 0; i < nr_pages; ++i) {
> > > +             if (!zswap_compress(pages[i], entries[i], pool))
> > > +                     return false;
> > > +     }
> > > +
> > > +     return true;
> > > +}
> >
> > How about introducing a `zswap_compress_folio()` interface which
> > can be used by `zswap_store()`?
> > ```
> > zswap_store()
> >         nr_pages = folio_nr_pages(folio)
> >
> >         entries = zswap_alloc_entries(nr_pages)
> >
> >         ret = zswap_compress_folio(folio, entries, pool)
> >
> >         // store entries into xarray and LRU list
> > ```
> >
> > And this version `zswap_compress_folio()` is very simple for now:
> > ```
> > zswap_compress_folio()
> >         nr_pages = folio_nr_pages(folio)
> >
> >         for (index = 0; index < nr_pages; ++index) {
> >                 struct page *page = folio_page(folio, index);
> >
> >                 if (!zswap_compress(page, &entries[index], pool))
> >                         return false;
> >         }
> >
> >         return true;
> > ```
> > This can be easily extended to support your "batched" version.
> >
> > Then the old `zswap_store_page()` could be removed.
> >
> > The good point is simplicity, that we don't need to slice folio
> > into multiple batches, then repeat the common operations for each
> > batch, like preparing entries, storing into xarray and LRU list...
> 
> +1

Thanks for the code review comments. One question though: does the
code simplification justify the memory-footprint cost? For instance,
let's say we want to store a 64k folio. We would allocate memory for
16 zswap entries up front, and if even one of the compressions fails,
we would deallocate all 16 zswap entries. Could this lead to
zswap_entry kmem_cache starvation, and subsequent zswap_store()
failures, in multi-process scenarios?

In other words, allocating entries in smaller batches -- more specifically,
only the compress batch size -- seems to strike a balance: it bounds the
memory footprint, mitigates the starvation risk, and may also help latency
(allocating, and potentially deallocating, a large number of zswap entries
could hurt latency).

If we agree on the merits of processing a large folio in smaller batches,
this in turn requires storing each batch of entries in the xarray/LRU
before moving on: all the zswap_store() operations for one batch must
complete before the next batch begins.
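To make the trade-off concrete, here is a minimal userspace C sketch of the batch-wise flow I have in mind (BATCH_SIZE, compress_one() and store_folio() are illustrative stand-ins, not the actual kernel API; in the kernel, "committing" a batch would mean storing its entries in the xarray and LRU):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define BATCH_SIZE 8   /* illustrative compress batch size */

struct entry { int handle; };

/* Stand-in for zswap_compress(): fails on a designated page index. */
static bool compress_one(int page_idx, struct entry *e, int fail_at)
{
	if (page_idx == fail_at)
		return false;
	e->handle = page_idx;   /* pretend this is the compressed object */
	return true;
}

/*
 * Batch-wise store: only BATCH_SIZE entries are ever allocated at once.
 * On failure, only the current batch needs cleanup; earlier batches
 * were already committed before moving on.
 */
static int store_folio(int nr_pages, int fail_at, int *committed)
{
	*committed = 0;
	for (int start = 0; start < nr_pages; start += BATCH_SIZE) {
		int nr = nr_pages - start;

		if (nr > BATCH_SIZE)
			nr = BATCH_SIZE;

		struct entry *entries = calloc(nr, sizeof(*entries));
		if (!entries)
			return -1;

		for (int i = 0; i < nr; i++) {
			if (!compress_one(start + i, &entries[i], fail_at)) {
				free(entries);  /* clean up this batch only */
				return -1;
			}
		}
		/* "store" the batch: commit its entries before the next batch */
		*committed += nr;
		free(entries);
	}
	return 0;
}
```

For a 64k folio (16 pages) with a batch size of 8, a failure in the second batch leaves the first 8 entries committed and frees only the 8 entries of the failing batch, rather than allocating and then tearing down all 16.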

> 
> Also, I don't like the helpers hiding some of the loops and leaving
> others, as Johannes said, please keep all the iteration over pages at
> the same function level where possible to make the code clear.

Sure. I can either inline all the loops into zswap_store_pages(), or convert
all the iterations into helpers that share a consistent signature:

zswap_<proc_name>(arrayed_struct, nr_pages);

Please let me know which would work best. Thanks!
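As an illustration of the second option, here is a userspace C sketch (the helper names and the entry struct are hypothetical, not the kernel's) where every helper takes the same (array, nr_pages) shape, so each helper hides exactly one loop and the caller stays flat:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct entry { void *obj; };

/* Allocate one entry per page; on failure, unwind what was allocated. */
static bool alloc_entries(struct entry *entries[], size_t nr_pages)
{
	for (size_t i = 0; i < nr_pages; i++) {
		entries[i] = calloc(1, sizeof(*entries[i]));
		if (!entries[i]) {
			while (i--)
				free(entries[i]);
			return false;
		}
	}
	return true;
}

/* Same (array, nr_pages) signature for the cleanup path. */
static void free_entries(struct entry *entries[], size_t nr_pages)
{
	for (size_t i = 0; i < nr_pages; i++)
		free(entries[i]);
}
```

The point is uniformity: whichever of the two styles is preferred, all the per-page iterations would live at the same level, either all inlined in zswap_store_pages() or all behind helpers of this shape.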

> 
> This should not be a separate series too, when I said divide into
> chunks I meant leave out the multiple folios batching and focus on
> batching pages in a single large folio, not breaking down the series
> into multiple ones. Not a big deal tho :)

I understand. I am trying to decouple, and develop in parallel, the
following, which I intend to converge into a v5 of the original series [1]:
  a) Vectorization, followed by batching, of zswap_store() of large folios.
  b) The acomp request chaining suggestions from Herbert, which could
       change the existing v4 implementation of the
       crypto_acomp_batch_compress() API that zswap would need to
       call for IAA compress batching.

[1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935

Thanks,
Kanchana

> 
> >
> > Thanks.
