Message-ID: <4119c1d0-5010-b2e7-3f1c-edd37f16f1f2@huawei.com>
Date: Wed, 26 Mar 2025 20:42:31 +0800
From: Jinjiang Tu <tujinjiang@...wei.com>
To: <yangge1116@....com>, <akpm@...ux-foundation.org>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<stable@...r.kernel.org>, <21cnbao@...il.com>, <david@...hat.com>,
<baolin.wang@...ux.alibaba.com>, <aneesh.kumar@...ux.ibm.com>,
<liuzixing@...on.cn>, Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH V4] mm/gup: Clear the LRU flag of a page before adding to
LRU batch
Hi,
We noticed a 12.3% performance regression in the LibMicro pwrite testcase due to
commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding to LRU batch").
The testcase is executed as follows, with the file located on tmpfs:

pwrite -E -C 200 -L -S -W -N "pwrite_t1k" -s 1k -I 500 -f $TFILE
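
For reference, this is roughly equivalent to the loop below (a minimal sketch only;
LibMicro's real harness adds option parsing, warm-up and timing, and the tmpfs path
used here is just an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* $TFILE is assumed to be on a tmpfs mount, e.g. /dev/shm */
        int fd = open("/dev/shm/pwrite_t1k", O_CREAT | O_RDWR, 0600);
        char buf[1024] = { 0 };   /* 1KB write, lands in a single page */

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* rewrite the same 1KB at offset 0, hitting the same page cache folio */
        for (long i = 0; i < 100000; i++) {
            if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
                perror("pwrite");
                return 1;
            }
        }

        close(fd);
        return 0;
    }
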
The testcase writes 1KB (only one page) to the tmpfs file and repeats this step many times. The flame
graph shows that the performance regression comes from folio_mark_accessed() and workingset_activation().
folio_mark_accessed() is called for the same page many times. Before this patch, each call would
add the page to cpu_fbatches.activate. When the fbatch became full, it was drained and the page
was promoted to the active list, after which later folio_mark_accessed() calls did nothing.
But after this patch, the folio's LRU flag is cleared once it is added to cpu_fbatches.activate.
From then on, folio_mark_accessed() never calls folio_activate() again because the page no longer
has the LRU flag, so the fbatch never fills up and the folio is never marked active. As a result,
later folio_mark_accessed() calls always end up calling workingset_activation(), leading to the
performance regression.
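
For context, a simplified sketch of the relevant branch (condensed from my reading of
mm/swap.c; not the verbatim kernel code, and the lru_gen path is omitted):

    /* simplified: the !folio_test_active() branch of folio_mark_accessed() */
    if (!folio_test_active(folio)) {
        /*
         * With the LRU flag set, folio_activate() queues the folio on
         * cpu_fbatches.activate; once that batch fills it is drained and
         * the folio becomes PG_active, so later calls stop here.
         */
        if (folio_test_lru(folio))
            folio_activate(folio);
        else
            /*
             * Without the LRU flag (after 33dfe9204f29 the folio sits in
             * cpu_fbatches.activate with the flag already cleared), only
             * the lru_add batch is searched, so the folio is never marked
             * active here.
             */
            __lru_cache_activate_folio(folio);

        folio_clear_referenced(folio);
        /* charged on every such call while the folio stays inactive */
        workingset_activation(folio);
    }
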
In addition, folio_mark_accessed() calls __lru_cache_activate_folio(). That function behaves as its
comment describes:
/*
 * Search backwards on the optimistic assumption that the folio being
 * activated has just been added to this batch.
 */
However, after this patch, a folio without the LRU flag may be in another fbatch as well, such as
cpu_fbatches.activate, so the backwards search of the lru_add batch will not find it.
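
For reference, a simplified sketch of that function (based on my reading of mm/swap.c;
details may differ slightly between kernel versions):

    static void __lru_cache_activate_folio(struct folio *folio)
    {
        struct folio_batch *fbatch;
        int i;

        local_lock(&cpu_fbatches.lock);
        fbatch = this_cpu_ptr(&cpu_fbatches.lru_add);

        /*
         * Search backwards on the optimistic assumption that the folio
         * being activated has just been added to this batch.  Only the
         * local lru_add batch is searched: a folio sitting in another
         * batch (e.g. cpu_fbatches.activate) is not found here.
         */
        for (i = folio_batch_count(fbatch) - 1; i >= 0; i--) {
            struct folio *batch_folio = fbatch->folios[i];

            if (batch_folio == folio) {
                folio_set_active(folio);
                break;
            }
        }

        local_unlock(&cpu_fbatches.lock);
    }
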
On 2024/7/4 14:52, yangge1116@....com wrote:
> From: yangge <yangge1116@....com>
>
> If a large amount of CMA memory is configured in the system (for example,
> CMA memory accounts for 50% of the system memory), starting a virtual
> machine with device passthrough will
> call pin_user_pages_remote(..., FOLL_LONGTERM, ...) to pin memory.
> Normally, if a page is present and in the CMA area, pin_user_pages_remote()
> will migrate the page from the CMA area to a non-CMA area because of the
> FOLL_LONGTERM flag. But the current code causes the migration to fail
> due to unexpected page refcounts, and eventually causes the virtual machine
> to fail to start.
>
> If a page is added to an LRU batch, its refcount increases by one; removing the
> page from the LRU batch decreases it by one. Page migration requires that the page
> not be referenced by anything except the page mapping. Before migrating a page, we
> should try to drain the page from the LRU batch in case the page is in it;
> however, folio_test_lru() is not sufficient to tell whether the page is
> in an LRU batch or not, and if the page is in an LRU batch, the migration will fail.
>
> To solve the problem above, we modify the logic of adding to the LRU batch.
> Before adding a page to an LRU batch, we clear the LRU flag of the page so
> that we can check whether the page is in an LRU batch with folio_test_lru(page).
> This is quite valuable, because we likely don't want to blindly drain the LRU
> batch simply because there is some unexpected reference on a page, as
> described above.
>
> This change makes the LRU flag of a page invisible for longer, which
> may impact some programs. For example, as long as a page is in an LRU
> batch, we cannot isolate it, and we cannot check if it's an LRU page.
> Further, a page can now only be in exactly one LRU batch. This doesn't
> seem to matter much, because when a new page is allocated from the buddy
> allocator and added to the LRU batch, or is isolated, its LRU flag may
> also be invisible for a long time.
>
> Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
> Cc: <stable@...r.kernel.org>
> Signed-off-by: yangge <yangge1116@....com>
> ---
> mm/swap.c | 43 +++++++++++++++++++++++++++++++------------
> 1 file changed, 31 insertions(+), 12 deletions(-)
>
> V4:
> Adjust commit message according to David's comments
> V3:
> Add fixes tag
> V2:
> Adjust code and commit message according to David's comments
>
> diff --git a/mm/swap.c b/mm/swap.c
> index dc205bd..9caf6b0 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -211,10 +211,6 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
> for (i = 0; i < folio_batch_count(fbatch); i++) {
> struct folio *folio = fbatch->folios[i];
>
> - /* block memcg migration while the folio moves between lru */
> - if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
> - continue;
> -
> folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
> move_fn(lruvec, folio);
>
> @@ -255,11 +251,16 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
> void folio_rotate_reclaimable(struct folio *folio)
> {
> if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
> - !folio_test_unevictable(folio) && folio_test_lru(folio)) {
> + !folio_test_unevictable(folio)) {
> struct folio_batch *fbatch;
> unsigned long flags;
>
> folio_get(folio);
> + if (!folio_test_clear_lru(folio)) {
> + folio_put(folio);
> + return;
> + }
> +
> local_lock_irqsave(&lru_rotate.lock, flags);
> fbatch = this_cpu_ptr(&lru_rotate.fbatch);
> folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
> @@ -352,11 +353,15 @@ static void folio_activate_drain(int cpu)
>
> void folio_activate(struct folio *folio)
> {
> - if (folio_test_lru(folio) && !folio_test_active(folio) &&
> - !folio_test_unevictable(folio)) {
> + if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
> struct folio_batch *fbatch;
>
> folio_get(folio);
> + if (!folio_test_clear_lru(folio)) {
> + folio_put(folio);
> + return;
> + }
> +
> local_lock(&cpu_fbatches.lock);
> fbatch = this_cpu_ptr(&cpu_fbatches.activate);
> folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
> @@ -700,6 +705,11 @@ void deactivate_file_folio(struct folio *folio)
> return;
>
> folio_get(folio);
> + if (!folio_test_clear_lru(folio)) {
> + folio_put(folio);
> + return;
> + }
> +
> local_lock(&cpu_fbatches.lock);
> fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate_file);
> folio_batch_add_and_move(fbatch, folio, lru_deactivate_file_fn);
> @@ -716,11 +726,16 @@ void deactivate_file_folio(struct folio *folio)
> */
> void folio_deactivate(struct folio *folio)
> {
> - if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
> - (folio_test_active(folio) || lru_gen_enabled())) {
> + if (!folio_test_unevictable(folio) && (folio_test_active(folio) ||
> + lru_gen_enabled())) {
> struct folio_batch *fbatch;
>
> folio_get(folio);
> + if (!folio_test_clear_lru(folio)) {
> + folio_put(folio);
> + return;
> + }
> +
> local_lock(&cpu_fbatches.lock);
> fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
> folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
> @@ -737,12 +752,16 @@ void folio_deactivate(struct folio *folio)
> */
> void folio_mark_lazyfree(struct folio *folio)
> {
> - if (folio_test_lru(folio) && folio_test_anon(folio) &&
> - folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
> - !folio_test_unevictable(folio)) {
> + if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
> + !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
> struct folio_batch *fbatch;
>
> folio_get(folio);
> + if (!folio_test_clear_lru(folio)) {
> + folio_put(folio);
> + return;
> + }
> +
> local_lock(&cpu_fbatches.lock);
> fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
> folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);