Message-ID: <CAMgjq7DoV=ZdHeREeMq1=hKzD_O40NkfHCym1Wo9m=J=cBnUvw@mail.gmail.com>
Date: Thu, 20 Feb 2025 10:48:27 +0800
From: Kairui Song <ryncsn@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>, 
	Chris Li <chrisl@...nel.org>, Barry Song <v-songbaohua@...o.com>, 
	Hugh Dickins <hughd@...gle.com>, Yosry Ahmed <yosryahmed@...gle.com>, 
	"Huang, Ying" <ying.huang@...ux.alibaba.com>, Nhat Pham <nphamcs@...il.com>, 
	Johannes Weiner <hannes@...xchg.org>, Kalesh Singh <kaleshsingh@...gle.com>, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/7] mm, swap: use percpu cluster as allocation fast path

On Thu, Feb 20, 2025 at 10:35 AM Baoquan He <bhe@...hat.com> wrote:
>
> On 02/19/25 at 07:12pm, Kairui Song wrote:
> >
> > > In reality it may be very difficult to achieve the 'each 2M space has been consumed for each order',
> >
> > Very true, but note that for order >= 1, the slot cache never worked
> > before. And for order == 0, it's very likely that a cluster will have
> > more than 64 slots usable. The test result I posted should be a good
> > example: the device is very full during the test, and performance is
> > basically identical to before. My only concern was about the device
>
> My worry is that the global percpu cluster may impact performance with
> multiple swap devices. Before, the per-si percpu cluster would cache
> the valid offset in one cluster for each order. For multiple swap
> devices, this consumes a little more percpu memory. Meanwhile, the new
> global percpu cluster can easily be switched to a different swap device
> when just one order needs it, and then the whole array becomes invalid.
> That looks a little drastic compared with before.

Ah, now I get what you mean. That does seem like it could be a problem indeed.

I think I can change the

+struct percpu_swap_cluster {
+       struct swap_info_struct *si;

to

+struct percpu_swap_cluster {
+       struct swap_info_struct *si[SWAP_NR_ORDERS];

Or embed the swp type in the offset; that way the orders won't affect
each other. What do you think? (See the sketches below.)
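
For illustration, a minimal sketch of what the per-order variant might
look like; the offset array and the local_lock field are assumptions
based on the fragments above, not the exact patch:

struct percpu_swap_cluster {
        struct swap_info_struct *si[SWAP_NR_ORDERS]; /* cached device, per order */
        unsigned long offset[SWAP_NR_ORDERS];        /* next candidate slot, per order */
        local_lock_t lock;                           /* protects this percpu state */
};

With one si pointer per order, falling back to another device for one
order no longer invalidates the cached offsets of the other orders.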

Previously, high-order allocations bypassed the slot cache, so
allocations could already happen on different same-priority devices. So
the behaviour of each order using a different device should be
acceptable.
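
For the other option above, embedding the swp type so each cached slot
carries its own device, a rough sketch: cache a full swp_entry_t per
order (the field name 'next' is hypothetical) and decode it with the
existing swp_entry()/swp_type()/swp_offset() helpers from
<linux/swapops.h>:

struct percpu_swap_cluster {
        swp_entry_t next[SWAP_NR_ORDERS]; /* swp type + offset, per order */
        local_lock_t lock;
};

/* cache:  pcp->next[order] = swp_entry(si->type, offset);
 * decode: type = swp_type(pcp->next[order]);
 *         off  = swp_offset(pcp->next[order]); */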

>
> Yeah, the example you showed looks good. I wonder how many swap
> devices were simulated in your example.
>
> > rotating: as the slot cache never worked for order >= 1, the device
> > rotated very frequently. But it still seems no one really cared about
> > it; mTHP swapout is a new thing, and the previous rotation rule seems
> > even more confusing than this new idea.
>
> I have never encountered a real production environment with multiple
> tiers and many swap devices. In reality, to my limited knowledge,
> usually only one swap device is deployed. If that's true most of the
> time, either the old code or the new code is fine; otherwise, it seems
> we may need to consider the impact.
