Message-Id: <20250403150055.94a38bc7e6e3f618fbc23ddd@linux-foundation.org>
Date: Thu, 3 Apr 2025 15:00:55 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Rik van Riel <riel@...riel.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, kernel-team@...a.com,
	Vinay Banakar <vny@...gle.com>, liuye <liuye@...inos.cn>,
	Hugh Dickins <hughd@...gle.com>, Mel Gorman <mgorman@...hsingularity.net>,
	Yu Zhao <yuzhao@...gle.com>, Shakeel Butt <shakeel.butt@...ux.dev>
Subject: Re: [PATCH v2] mm/vmscan: batch TLB flush during memory reclaim
On Fri, 28 Mar 2025 14:20:55 -0400 Rik van Riel <riel@...riel.com> wrote:
> The current implementation in shrink_folio_list() performs a full TLB
> flush for every individual folio reclaimed. This causes unnecessary
> overhead during memory reclaim.
>
> The current code:
> 1. Clears PTEs and unmaps each page individually
> 2. Performs a full TLB flush on every CPU the mm is running on
>
> The new code:
> 1. Clears PTEs and unmaps each page individually
> 2. Adds each unmapped page to pageout_folios
> 3. Flushes the TLB once before processing pageout_folios
>
> This reduces the number of TLB flushes issued by the memory reclaim
> code to 1/N of its previous value, where N is the number of mapped
> folios encountered in the batch processed by shrink_folio_list().
Were any runtime benefits observable?