lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <87zftlx25p.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 22 Apr 2024 15:54:58 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Kairui Song <ryncsn@...il.com>, Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org,  Kairui Song <kasong@...cent.com>,  Andrew Morton
 <akpm@...ux-foundation.org>,  Chris Li <chrisl@...nel.org>,  Barry Song
 <v-songbaohua@...o.com>,  Ryan Roberts <ryan.roberts@....com>,  Neil Brown
 <neilb@...e.de>,  Minchan Kim <minchan@...nel.org>,  Hugh Dickins
 <hughd@...gle.com>,  David Hildenbrand <david@...hat.com>,  Yosry Ahmed
 <yosryahmed@...gle.com>,  linux-fsdevel@...r.kernel.org,
  linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/8] mm/swap: optimize swap cache search space

Hi, Kairui,

Kairui Song <ryncsn@...il.com> writes:

> From: Kairui Song <kasong@...cent.com>
>
> Currently we use one swap_address_space for every 64M chunk to reduce lock
> contention, this is like having a set of smaller swap files inside one
> big swap file. But when doing swap cache look up or insert, we are
> still using the offset of the whole large swap file. This is OK for
> correctness, as the offset (key) is unique.
>
> But Xarray is specially optimized for small indexes, it creates the
> redix tree levels lazily to be just enough to fit the largest key
> stored in one Xarray. So we are wasting tree nodes unnecessarily.
>
> For 64M chunk it should only take at most 3 level to contain everything.
> But we are using the offset from the whole swap file, so the offset (key)
> value will be way beyond 64M, and so will the tree level.
>
> Optimize this by reduce the swap cache search space into 64M scope.

In general, I think that it makes sense to reduce the depth of the
xarray.

One concern is that IIUC we make swap cache behaves like file cache if
possible.  And your change makes swap cache and file cache diverge more.
Is it possible for us to keep them similar?

For example,

Is it possible to return the offset inside 64M range in
__page_file_index() (maybe rename it)?

Is it possible to add "start_offset" support in xarray, so "index"
will subtract "start_offset" before looking up / inserting?

Is it possible to use multiple range locks to protect one xarray to
improve the lock scalability?  This is why we have multiple "struct
address_space" for one swap device.  And, we may have same lock
contention issue for large files too.

I haven't look at the code in details.  So, my idea may not make sense
at all.  If so, sorry about that.

Hi, Matthew,

Can you teach me on this too?

--
Best Regards,
Huang, Ying

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ