Message-ID: <CAKEwX=Na6kgGLsnknkfsc75hk-Q690z0J9rh=S=BmK2qjVU3rw@mail.gmail.com>
Date: Thu, 3 Apr 2025 13:38:26 -0700
From: Nhat Pham <nphamcs@...il.com>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>, akpm@...ux-foundation.org, hannes@...xchg.org, 
	cerasuolodomenico@...il.com, sjenning@...hat.com, ddstreet@...e.org, 
	vitaly.wool@...sulko.com, hughd@...gle.com, corbet@....net, 
	konrad.wilk@...cle.com, senozhatsky@...omium.org, rppt@...nel.org, 
	linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org, 
	david@...t.cz, Minchan Kim <minchan@...nel.org>, Shakeel Butt <shakeel.butt@...ux.dev>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Kairui Song <ryncsn@...il.com>
Subject: Re: [PATCH 0/2] minimize swapping on zswap store failure

On Wed, Apr 2, 2025 at 1:06 PM Joshua Hahn <joshua.hahnjy@...il.com> wrote:
>
> On Mon, 16 Oct 2023 17:57:31 -0700 Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> > On Mon, Oct 16, 2023 at 5:35 PM Nhat Pham <nphamcs@...il.com> wrote:
>
> > I thought before about having a special list_head that allows us to
> > use the lower bits of the pointers as markers, similar to the xarray.
> > The markers can be used to place different objects on the same list.
> > We can have a list that is a mixture of struct page and struct
> > zswap_entry. I never pursued this idea, and I am sure someone will
> > scream at me for suggesting it. Maybe there is a less convoluted way
> > to keep the LRU ordering intact without allocating memory on the
> > reclaim path.
>
> Hi Yosry,
>
> Apologies for reviving an old thread, but I wasn't sure whether opening an
> entirely new thread was a better choice : -)
>
> So I've implemented your idea, using the lower 2 bits of the list_head's prev
> pointer (the lowest bit indicates whether the list_head belongs to a page or a
> zswap_entry, and the second-lowest bit was repurposed for the second-chance
> algorithm).
>
> For a very high-level overview of what I did in the patch:
> - When a page fails to compress, I remove the page mapping and tag both the
>   xarray entry (tag == set lowest bit to 1) and the page's list_head prev ptr,
>   then store the page directly into the zswap LRU.
> - In zswap_load, we take the entry out of the xarray and check if it's tagged.
>   - If it is tagged, then instead of decompressing, we just copy the page's
>     contents to the newly allocated page.
> - (More details about how to teach vmscan / page_io / list iterators how to
>   handle this, but we can gloss over those details for now)
>
> I have a working version, but have been holding off because I have only been
> seeing regressions. I wasn't really sure where they were coming from, but
> after going through some perf traces with Nhat, I found out that the
> regressions come from the extra page faults caused by initially unmapping the
> page and then re-allocating it on every load. This causes (1) more memcg
> flushing, and (2) extra allocations ==> more pressure ==> more reclaim, even
> though we only keep the extra page temporarily.

Thanks for your effort on this idea :)
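
Just to check that I'm picturing the tagging correctly, something along
these lines? (All names below are made up, not from your patch -- just a
sketch of the low-bit tagging trick, which works because list_heads are
at least word-aligned:)

#include <linux/list.h>

/* lowest bit: this node is a struct page, not a zswap_entry */
#define ZSWAP_LRU_PAGE			0x1UL
/* second-lowest bit: second-chance bit */
#define ZSWAP_LRU_SECOND_CHANCE		0x2UL
#define ZSWAP_LRU_TAG_MASK		0x3UL

static inline void zswap_lru_set_tag(struct list_head *node, unsigned long tag)
{
	node->prev = (struct list_head *)((unsigned long)node->prev | tag);
}

static inline unsigned long zswap_lru_tags(const struct list_head *node)
{
	return (unsigned long)node->prev & ZSWAP_LRU_TAG_MASK;
}

static inline struct list_head *zswap_lru_prev(const struct list_head *node)
{
	/* every generic list helper touching ->prev has to mask the tags out */
	return (struct list_head *)((unsigned long)node->prev & ~ZSWAP_LRU_TAG_MASK);
}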

>
> Just wanted to put this here in case you were still thinking about this idea.
> What do you think? Ideally, there would be a way to keep the page around in
> the zswap LRU without having to re-allocate a new page on a fault, but that
> seems like a bigger task.

I wonder if we can return the page in the event of a page fault. We'll
need to keep it in the swap cache for this to work:

1. On reclaim, do the same thing as your prototype, but keep the page
in the swap cache (i.e., do not remove_mapping() it).

2. On page fault (do_swap_page), before returning, check whether the page is
on the zswap LRU. If it is, invalidate the zswap LRU linkage and put it
back on the proper LRU.
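
Something like the below in the fault path, maybe? (Completely untested;
zswap_lru_contains() and zswap_lru_del() are hypothetical helpers here,
while folio_add_lru() is the existing API for putting a folio back on
the regular LRU:)

	/* in do_swap_page(), after the folio is found in the swap cache */
	if (zswap_lru_contains(folio)) {
		/* unlink it so zswap no longer owns the page... */
		zswap_lru_del(folio);
		/* ...and put it back on the regular anon LRU */
		folio_add_lru(folio);
	}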

Johannes, do you feel like this is possible?

>
> Ultimately the goal is to prevent an incompressible page from repeatedly tying
> up the compression algorithm across multiple reclaim attempts, but if we are
> spending more time allocating new pages... maybe this isn't the correct
> approach :(

Hmmm, IIUC this problem also exists with zram, since zram allocates a
PAGE_SIZE-sized buffer to hold the original page's content. I will
note though that zram seems to favor these kinds of pages for
writeback :) Maybe this is why...?

(+ Minchan)

>
> Please let me know if you have any thoughts on this : -)

Well, worst case, there is still the special incompressible LRU idea.
We'll need some worker thread to check for write access to these pages
and promote them, though.
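
Very rough sketch of what I have in mind for that worker (everything
here is hypothetical -- there is no separate incompressible LRU today,
and "write access" is approximated with the dirty flag):

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/workqueue.h>

static LIST_HEAD(zswap_incompressible_lru);
static DEFINE_SPINLOCK(zswap_incompressible_lock);

static void zswap_incompressible_scan(struct work_struct *work);
static DECLARE_DELAYED_WORK(zswap_incompressible_work, zswap_incompressible_scan);

static void zswap_incompressible_scan(struct work_struct *work)
{
	struct folio *folio, *next;

	spin_lock(&zswap_incompressible_lock);
	list_for_each_entry_safe(folio, next, &zswap_incompressible_lru, lru) {
		if (!folio_test_dirty(folio))
			continue;
		/* written to since it failed to compress: promote it back to
		 * the regular anon LRU so it goes through reclaim afresh */
		list_del_init(&folio->lru);
		folio_add_lru(folio);
	}
	spin_unlock(&zswap_incompressible_lock);

	/* rescan periodically; the initial kick-off at init is omitted */
	schedule_delayed_work(&zswap_incompressible_work, HZ);
}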

(+ Shakeel)

> Have a great day!
> Joshua
>
> Sent using hkml (https://github.com/sjp38/hackermail)
>
