Message-ID: <CAMgjq7Buvn4f4U8uwpZB3e-rWgCcpvfd5uopgUqQxrVcgGcEMQ@mail.gmail.com>
Date: Mon, 29 Apr 2024 01:45:49 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: "Huang, Ying" <ying.huang@...el.com>, Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org, 
	Andrew Morton <akpm@...ux-foundation.org>, Barry Song <v-songbaohua@...o.com>, 
	Ryan Roberts <ryan.roberts@....com>, Neil Brown <neilb@...e.de>, Minchan Kim <minchan@...nel.org>, 
	Hugh Dickins <hughd@...gle.com>, David Hildenbrand <david@...hat.com>, Yosry Ahmed <yosryahmed@...gle.com>, 
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/8] mm/swap: optimize swap cache search space

On Mon, Apr 29, 2024 at 1:37 AM Kairui Song <ryncsn@...il.com> wrote:
>
> On Sat, Apr 27, 2024 at 7:16 AM Chris Li <chrisl@...nel.org> wrote:
> >
> > Hi Ying,
> >
> > On Tue, Apr 23, 2024 at 7:26 PM Huang, Ying <ying.huang@...el.com> wrote:
> > >
> > > Hi, Matthew,
> > >
> > > Matthew Wilcox <willy@...radead.org> writes:
> > >
> > > > On Mon, Apr 22, 2024 at 03:54:58PM +0800, Huang, Ying wrote:
> > > >> Is it possible to add "start_offset" support in xarray, so "index"
> > > >> will subtract "start_offset" before looking up / inserting?
> > > >
> > > > We kind of have that with XA_FLAGS_ZERO_BUSY which is used for
> > > > XA_FLAGS_ALLOC1.  But that's just one bit for the entry at 0.  We could
> > > > generalise it, but then we'd have to store that somewhere and there's
> > > > no obvious good place to store it that wouldn't enlarge struct xarray,
> > > > which I'd be reluctant to do.
> > > >
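
(Just to make that idea concrete for myself: a generalised version
could keep the offset outside struct xarray in a small wrapper. A
purely hypothetical sketch, none of these helpers exist today and the
names are invented:)

    struct offset_xarray {
            struct xarray   xa;
            unsigned long   start_offset;   /* subtracted from every index */
    };

    /* Hypothetical wrappers, just the "subtract before lookup" idea. */
    static inline void *oxa_load(struct offset_xarray *oxa,
                                 unsigned long index)
    {
            return xa_load(&oxa->xa, index - oxa->start_offset);
    }

    static inline void *oxa_store(struct offset_xarray *oxa,
                                  unsigned long index, void *entry,
                                  gfp_t gfp)
    {
            return xa_store(&oxa->xa, index - oxa->start_offset,
                            entry, gfp);
    }

That would leave struct xarray itself untouched, at the cost of every
user carrying the extra field.
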
> > > >> Is it possible to use multiple range locks to protect one xarray to
> > > >> improve the lock scalability?  This is why we have multiple "struct
> > > >> address_space" for one swap device.  And, we may have same lock
> > > >> contention issue for large files too.
> > > >
> > > > It's something I've considered.  The issue is search marks.  If we delete
> > > > an entry, we may have to walk all the way up the xarray clearing bits as
> > > > we go and I'd rather not grab a lock at each level.  There's a convenient
> > > > 4 byte hole between nr_values and parent where we could put it.
> > > >
> > > > Oh, another issue is that we use i_pages.xa_lock to synchronise
> > > > address_space.nrpages, so I'm not sure that a per-node lock will help.
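
(For anyone following along: the 4 byte hole mentioned above comes from
the four unsigned char fields at the top of struct xa_node being
followed by a pointer. Roughly, from include/linux/xarray.h:)

    struct xa_node {
            unsigned char   shift;      /* bits remaining in each slot */
            unsigned char   offset;     /* slot offset in parent */
            unsigned char   count;      /* total entry count */
            unsigned char   nr_values;  /* value entry count */
            /* 4 byte padding hole here on 64-bit builds */
            struct xa_node __rcu *parent;   /* NULL at top of tree */
            struct xarray   *array;     /* the array we belong to */
            /* slots[], marks[] etc. omitted */
    };
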
> > >
> > > Thanks for looking at this.
> > >
> > > > But I'm conscious that there are workloads which show contention on
> > > > xa_lock as their limiting factor, so I'm open to ideas to improve all
> > > > these things.
> > >
> > > I have no idea so far because of my very limited knowledge about xarray.
> >
> > For the swap file usage, I have been considering an idea to remove the
> > index part of the xarray from swap cache. Swap cache is different from
> > file cache in a few aspects.
> > For one, if we want to have a folio equivalent of a "large swap
> > entry", then the natural alignment of those swap offsets does not
> > make sense. Ideally we should be able to write the folio to
> > unaligned swap file locations.
> >
>
> Hi Chris,
>
> This sounds interesting; I have a few questions though...
>
> Are you suggesting we handle swap on file and swap on device
> differently? Swap on file is much less frequently used than swap on
> device I think.
>
> And what do you mean by the "index part of the xarray"? If we need a
> cache, an xarray still seems like one of the best choices to hold the
> content.
>
> > The other aspect of swap files is that we already have different
> > data structures organized around the swap offset: swap_map and
> > swap_cgroup. If we group the swap related data structures together,
> > we can add a pointer to a union of a folio or a shadow swap entry.
> > We can use atomic updates on the swap struct members, or break down
> > the access lock by ranges just like the swap cluster does.
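
(If I'm reading the idea right, you mean something like the purely
hypothetical structure below, one instance per swap offset? The field
names are invented just for illustration:)

    struct swap_desc {
            unsigned char   usage;      /* what swap_map[] tracks today */
            unsigned short  cgroup;     /* what swap_cgroup tracks today */
            union {
                    struct folio    *folio;     /* folio in swap cache */
                    void            *shadow;    /* shadow swap entry */
            };
    };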

Oh, and BTW I'm also trying to break down the swap address space range
(from 64M to 16M, i.e. SWAP_ADDRESS_SPACE_SHIFT from 14 to 12). It's a
simple approach, but the coupling and the increased memory usage of the
address_space structures cause a performance regression (about -2% for
the worst real-world workload). I found this part very performance
sensitive, so basically I'm not making much progress on the future
items I mentioned in this cover letter. New ideas would be very
helpful!
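
For reference, this is roughly how the partitioning works today
(simplified from include/linux/swap.h and mm/swap_state.c); halving the
covered range twice means four times as many address_space structs per
swap device:

    /* One struct address_space per 64M of swap space today. */
    #define SWAP_ADDRESS_SPACE_SHIFT    14
    #define SWAP_ADDRESS_SPACE_PAGES    (1 << SWAP_ADDRESS_SPACE_SHIFT)

    #define swap_address_space(entry)                            \
            (&swapper_spaces[swp_type(entry)][swp_offset(entry)  \
                    >> SWAP_ADDRESS_SPACE_SHIFT])

    /*
     * init_swap_address_space() sizes the per-device array roughly as:
     *
     *   nr = DIV_ROUND_UP(nr_pages, SWAP_ADDRESS_SPACE_PAGES);
     *   spaces = kvcalloc(nr, sizeof(struct address_space), GFP_KERNEL);
     *
     * so going from shift 14 to 12 allocates 4x as many address_space
     * structs, which is where the extra memory usage comes from.
     */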

> >
> > I want to discuss those ideas in the upcoming LSF/MM meet up as well.
>
> Looking forward to it!
>
> >
> > Chris
