Message-ID: <CALf+9YdLB3U7cjY8xi0PQjqPJ_YKformTZFQ4ZxXo_ZzsEwCog@mail.gmail.com>
Date: Thu, 23 Jan 2025 13:16:16 -0600
From: Vinay Banakar <vny@...gle.com>
To: Rik van Riel <riel@...riel.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	akpm@...ux-foundation.org, willy@...radead.org, mgorman@...e.de, 
	Wei Xu <weixugc@...gle.com>, Greg Thelen <gthelen@...gle.com>
Subject: Re: [PATCH] mm: Optimize TLB flushes during page reclaim

On Wed, Jan 22, 2025 at 10:20 PM Rik van Riel <riel@...riel.com> wrote:
> I see how moving the arch_tlbbatch_flush to
> between unmapping the pages, and issuing the
> IO could reduce TLB flushing operations
> significantly.
>
> However, how does that lead to PMD level
> operations?
>
> I don't see any code in your patch that gathers
> things at the PMD level. The code simply operates
> on whatever folio size is on the list that is
> being passed to shrink_folio_list()
>
> Is the PMD level thing some MGLRU specific feature?
> My main quibble is with the changelog, and the
> comment that refers to "PMD level" operations,
> when the code does not appear to be doing any
> PMD level coalescing.

I initially assumed that all paths leading to shrink_folio_list()
submitted 512 pages at a time. It turns out this is only true for the
madvise path; the other paths use different batch sizes:
- shrink_inactive_list(): Uses SWAP_CLUSTER_MAX (32 pages)
- evict_folios(): Varies between 64 and 4096 pages (MGLRU)
- damon_pa_pageout(): Variable size based on DAMON region
- reclaim_clean_pages_from_list(): Clean pages only, unaffected by this patch
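
Whatever the caller's batch size, the reordering Rik describes above
comes down to roughly the following. This is a condensed illustration
of the idea only, not code from the patch; it omits the reference,
dirty and writeback checks that the real loop in shrink_folio_list()
performs:

	/*
	 * Condensed sketch: unmap the whole batch first so the TLB
	 * invalidations accumulate, issue one flush, and only then
	 * start writeback on each folio.
	 */
	struct swap_iocb *plug = NULL;
	struct folio *folio;

	/* Pass 1: unmap every folio, deferring the TLB flush. */
	list_for_each_entry(folio, folio_list, lru)
		try_to_unmap(folio, TTU_BATCH_FLUSH);

	/* One batched flush (one round of IPIs) for the whole list. */
	try_to_unmap_flush_dirty();

	/* Pass 2: mappings are gone, so it is safe to write back. */
	list_for_each_entry(folio, folio_list, lru)
		pageout(folio, folio_mapping(folio), &plug);

The flush only needs to happen once per batch, as long as it is issued
before any of the folios in that batch are submitted for IO.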

We have two options:
1. Keep the current logic where TLB flush batching varies by caller
2. Enforce consistent 512-page batching in shrink_folio_list() and
also convert to folio_batch as suggested by Matthew
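
If we go with option 2, the chunking could sit at the top of
shrink_folio_list(), roughly like the sketch below. To be clear, the
512 constant, the shrink_folio_chunk() helper and the overall shape
are placeholders to show the idea, not code from the patch, and the
real conversion would presumably use folio_batch as Matthew suggested:

	#define RECLAIM_UNMAP_BATCH	512	/* placeholder constant */

	/*
	 * Sketch only: peel folios off the caller's list in fixed-size
	 * chunks so every caller gets the same unmap -> single flush ->
	 * writeback batching, regardless of how many folios it isolated.
	 */
	LIST_HEAD(ret_folios);
	unsigned int nr_reclaimed = 0;

	while (!list_empty(folio_list)) {
		LIST_HEAD(chunk);
		unsigned int nr = 0;

		while (!list_empty(folio_list) && nr < RECLAIM_UNMAP_BATCH) {
			list_move(folio_list->next, &chunk);
			nr++;
		}

		/*
		 * Hypothetical helper: runs the existing per-folio logic
		 * on "chunk" and moves anything that survives reclaim to
		 * ret_folios.
		 */
		nr_reclaimed += shrink_folio_chunk(&chunk, &ret_folios,
						   pgdat, sc);
	}
	list_splice(&ret_folios, folio_list);

That would bound how much work accumulates between flushes for large
callers like evict_folios(), while leaving the madvise path (already
512 pages at a time) effectively unchanged.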

> > I'd appreciate your feedback on this approach, especially on the
> > correctness of batched BIO submissions. Looking forward to your
> > comments.
> Maybe the filesystem people have more comments on
> this aspect, but from a VM perspective I suspect
> that doing that batch flush in one spot, and then
> iterating over the pages should be fine.

Thank you for commenting on this!

Vinay
