Date:   Wed, 14 Jun 2023 10:59:53 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Sergey Senozhatsky <senozhatsky@...omium.org>,
        Minchan Kim <minchan@...nel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Seth Jennings <sjenning@...hat.com>,
        Dan Streetman <ddstreet@...e.org>,
        Vitaly Wool <vitaly.wool@...sulko.com>,
        Nhat Pham <nphamcs@...il.com>,
        Domenico Cerasuolo <cerasuolodomenico@...il.com>,
        Yu Zhao <yuzhao@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: zswap: multiple zpool support

On Tue, Jun 13, 2023 at 01:13:59PM -0700, Yosry Ahmed wrote:
> On Mon, Jun 5, 2023 at 6:56 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >
> > On Fri, Jun 2, 2023 at 1:24 PM Johannes Weiner <hannes@...xchg.org> wrote:
> > > Sorry, I should have been more precise.
> > >
> > > I'm saying that using NR_CPUS pools, and replacing the hash with
> > > smp_processor_id(), would accomplish your goal of pool concurrency.
> > > But it would do so with a broadly-used, well-understood scaling
> > > factor. We might not need a config option at all.
> > >
> > > The lock would still be there, but contention would be reduced fairly
> > > optimally (barring preemption) for store concurrency at least. Not
> > > fully eliminated due to frees and compaction, though, yes.
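
Just to make that suggestion concrete, here is roughly the shape I
have in mind; purely a sketch, none of these names (zswap_zpools,
zswap_store_zpool) exist in zswap today:

    /* needs <linux/smp.h> and <linux/threads.h> */
    static struct zpool *zswap_zpools[NR_CPUS];

    static struct zpool *zswap_store_zpool(void)
    {
            /*
             * The pool choice is only a scalability hint, so the raw
             * id is fine even if we migrate to another CPU before
             * taking the pool's lock.
             */
            return zswap_zpools[raw_smp_processor_id()];
    }

Frees and compaction would of course still hit whatever pool an entry
was originally stored in, which is the "not fully eliminated" part.
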
> 
> I thought about this again and had some internal discussions, and I am
> more unsure about it. Beyond the memory overhead, having too many
> zpools might result in higher fragmentation within the zpools. For
> zsmalloc, for example, we do not compact across multiple zpools.
> 
> We have been using a specific number of zpools in production for
> years now, and we know it can be tuned to achieve performance gains.
> OTOH, percpu zpools (or NR_CPUS pools) seem like too big a hammer:
> probably too many zpools in a lot of cases, and we wouldn't know how
> many zpools actually fit our workloads.

Is it the same number across your entire fleet and all workloads?

How large *is* the number in relation to CPUs?

> I see value in allowing the number of zpools to be directly
> configurable (it can always be left as 1), and am worried that with
> percpu we would be throwing away years of production testing for an
> unknown.
> 
> I am obviously biased, but I don't think this adds significant
> complexity to the zswap code as-is (or as v2 is, to be precise).

I had typed out this long list of reasons why I hate this change, and
then deleted it to suggest the per-cpu scaling factor.

But to summarize my POV, I think a user-facing config option for this
is completely inappropriate. There are no limits, no guidance, no sane
default. And it's very selective about micro-optimizing this one lock
when there are several locks and data structures of the same scope in
the swap path. This isn't a reasonable question to ask people building
kernels. It's writing code through the Kconfig file.

Data structure scalability should be solved in code, not with config
options.
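
To illustrate what I mean by "in code": the number of pools can be
derived from something the kernel already knows, e.g. the CPU count,
and entries hashed into the array. Again just a sketch with made-up
names (zswap_nr_zpools, zswap_entry_zpool, pool->zpools), not a claim
about what this patch or mainline does:

    /* needs <linux/hash.h>, <linux/log2.h>, <linux/minmax.h>, <linux/cpumask.h> */
    static unsigned int zswap_nr_zpools __read_mostly;

    static void zswap_nr_zpools_init(void)
    {
            /* keep it a power of two >= 2 so ilog2() below is valid */
            zswap_nr_zpools = clamp_t(unsigned int,
                                      roundup_pow_of_two(num_possible_cpus()),
                                      2, 32);
    }

    static struct zpool *zswap_entry_zpool(struct zswap_pool *pool,
                                           struct zswap_entry *entry)
    {
            return pool->zpools[hash_ptr(entry, ilog2(zswap_nr_zpools))];
    }

Whether the selection is per-CPU or hashed is secondary; the point is
that the count shouldn't be a question we put to whoever is running
'make menuconfig'.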

My vote on the patch as proposed is NAK.
