Message-ID: <CAMgjq7AWZg0Y7+v3_Z8-YVUXrANB29mCDSyzF39dtAM_TQ0aKw@mail.gmail.com>
Date: Sat, 1 Feb 2025 16:01:43 +0800
From: Kairui Song <ryncsn@...il.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, Jens Axboe <axboe@...nel.dk>,
"Jason A. Donenfeld" <Jason@...c4.com>, Andi Shyti <andi.shyti@...ux.intel.com>,
Chengming Zhou <chengming.zhou@...ux.dev>, Christian Brauner <brauner@...nel.org>,
Christophe Leroy <christophe.leroy@...roup.eu>, Dan Carpenter <dan.carpenter@...aro.org>,
David Airlie <airlied@...il.com>, David Hildenbrand <david@...hat.com>, Hao Ge <gehao@...inos.cn>,
Jani Nikula <jani.nikula@...ux.intel.com>, Johannes Weiner <hannes@...xchg.org>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>, Josef Bacik <josef@...icpanda.com>,
Masami Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Miklos Szeredi <miklos@...redi.hu>, Nhat Pham <nphamcs@...il.com>,
Oscar Salvador <osalvador@...e.de>, Ran Xiaokai <ran.xiaokai@....com.cn>,
Rodrigo Vivi <rodrigo.vivi@...el.com>, Simona Vetter <simona@...ll.ch>,
Steven Rostedt <rostedt@...dmis.org>, Tvrtko Ursulin <tursulin@...ulin.net>,
Vlastimil Babka <vbabka@...e.cz>, Yosry Ahmed <yosryahmed@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCHv3 06/11] mm/vmscan: Use PG_dropbehind instead of PG_reclaim
On Thu, Jan 30, 2025 at 6:02 PM Kirill A. Shutemov
<kirill.shutemov@...ux.intel.com> wrote:
>
> The recently introduced PG_dropbehind allows for freeing folios
> immediately after writeback. Unlike PG_reclaim, it does not need vmscan
> to be involved to get the folio freed.
>
> Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
> pageout().
>
> It is safe to leave PG_dropbehind on the folio if, for some reason
> (bug?), the folio is not in a writeback state after ->writepage().
> In these cases, the kernel had to clear PG_reclaim as it shared a page
> flag bit with PG_readahead.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Acked-by: David Hildenbrand <david@...hat.com>
> ---
> mm/vmscan.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bc1826020159..c97adb0fdaa4 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -692,19 +692,16 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
>  		if (shmem_mapping(mapping) && folio_test_large(folio))
>  			wbc.list = folio_list;
>
> -		folio_set_reclaim(folio);
> +		folio_set_dropbehind(folio);
> +
>  		res = mapping->a_ops->writepage(&folio->page, &wbc);
>  		if (res < 0)
>  			handle_write_error(mapping, folio, res);
>  		if (res == AOP_WRITEPAGE_ACTIVATE) {
> -			folio_clear_reclaim(folio);
> +			folio_clear_dropbehind(folio);
>  			return PAGE_ACTIVATE;
>  		}
>
> -		if (!folio_test_writeback(folio)) {
> -			/* synchronous write or broken a_ops? */
> -			folio_clear_reclaim(folio);
> -		}
>  		trace_mm_vmscan_write_folio(folio);
>  		node_stat_add_folio(folio, NR_VMSCAN_WRITE);
>  		return PAGE_SUCCESS;
> --
> 2.47.2
>
Hi, I'm seeing the following panic with swap after this commit:
[ 29.672319] Oops: general protection fault, probably for non-canonical address 0xffff88909a3be3: 0000 [#1] PREEMPT SMP NOPTI
[ 29.675503] CPU: 82 UID: 0 PID: 5145 Comm: tar Kdump: loaded Not tainted 6.13.0.ptch-g1fe9ea48ec98 #917
[ 29.677508] Hardware name: Red Hat KVM/RHEL-AV, BIOS 0.0.0 02/06/2015
[ 29.678886] RIP: 0010:__lock_acquire+0x20/0x15d0
[ 29.679891] Code: 90 90 90 90 90 90 90 90 90 90 41 57 41 56 41 55 41 54 55 53 48 83 ec 30 8b 2d 10 ac f3 01 44 8b ac 24 88 00 00 00 85 ed 74 64 <48> 8b 07 49 89 ff 48 3d 20 1d bf 83 74 56 8b 1d 8c f5 b1 01 41 89
[ 29.683852] RSP: 0018:ffffc9000bea3148 EFLAGS: 00010002
[ 29.684980] RAX: ffff8890874b2940 RBX: 0000000000000200 RCX: 0000000000000000
[ 29.686510] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00ffff88909a3be3
[ 29.688031] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
[ 29.689561] R10: 0000000000000000 R11: 0000000000000020 R12: 00ffff88909a3be3
[ 29.691087] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 29.692613] FS: 00007fa05c2824c0(0000) GS:ffff88a03fa80000(0000) knlGS:0000000000000000
[ 29.694339] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 29.695581] CR2: 000055f9abb7fc7d CR3: 00000010932f2002 CR4: 0000000000770eb0
[ 29.697109] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 29.698637] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 29.700161] PKRU: 55555554
[ 29.700759] Call Trace:
[ 29.701296] <TASK>
[ 29.701770] ? __die_body+0x1e/0x60
[ 29.702540] ? die_addr+0x3c/0x60
[ 29.703267] ? exc_general_protection+0x18f/0x3c0
[ 29.704290] ? asm_exc_general_protection+0x26/0x30
[ 29.705345] ? __lock_acquire+0x20/0x15d0
[ 29.706215] ? lockdep_hardirqs_on_prepare+0xda/0x190
[ 29.707304] ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
[ 29.708452] lock_acquire+0xbf/0x2e0
[ 29.709229] ? folio_unmap_invalidate+0x12f/0x220
[ 29.710257] ? __folio_end_writeback+0x15d/0x430
[ 29.711260] ? __folio_end_writeback+0x116/0x430
[ 29.712261] _raw_spin_lock+0x30/0x40
[ 29.713064] ? folio_unmap_invalidate+0x12f/0x220
[ 29.714076] folio_unmap_invalidate+0x12f/0x220
[ 29.715058] folio_end_writeback+0xdf/0x190
[ 29.715967] swap_writepage_bdev_sync+0x1e0/0x450
[ 29.716994] ? __pfx_submit_bio_wait_endio+0x10/0x10
[ 29.718074] swap_writepage+0x46b/0x6b0
[ 29.718917] pageout+0x14b/0x360
[ 29.719628] shrink_folio_list+0x67d/0xec0
[ 29.720519] ? mark_held_locks+0x48/0x80
[ 29.721375] evict_folios+0x2a7/0x9e0
[ 29.722179] try_to_shrink_lruvec+0x19a/0x270
[ 29.723130] lru_gen_shrink_lruvec+0x70/0xc0
[ 29.724060] ? __lock_acquire+0x558/0x15d0
[ 29.724954] shrink_lruvec+0x57/0x780
[ 29.725754] ? find_held_lock+0x2d/0xa0
[ 29.726588] ? rcu_read_unlock+0x17/0x60
[ 29.727449] shrink_node+0x2ad/0x930
[ 29.728229] do_try_to_free_pages+0xbd/0x4e0
[ 29.729160] try_to_free_mem_cgroup_pages+0x123/0x2c0
[ 29.730252] try_charge_memcg+0x222/0x660
[ 29.731128] charge_memcg+0x3c/0x80
[ 29.731888] __mem_cgroup_charge+0x30/0x70
[ 29.732776] shmem_alloc_and_add_folio+0x1a5/0x480
[ 29.733818] ? filemap_get_entry+0x155/0x390
[ 29.734748] shmem_get_folio_gfp+0x28c/0x6c0
[ 29.735680] shmem_write_begin+0x5a/0xc0
[ 29.736535] generic_perform_write+0x12a/0x2e0
[ 29.737503] shmem_file_write_iter+0x86/0x90
[ 29.738428] vfs_write+0x364/0x530
[ 29.739180] ksys_write+0x6c/0xe0
[ 29.739906] do_syscall_64+0x66/0x140
[ 29.740713] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 29.741800] RIP: 0033:0x7fa05c439984
[ 29.742584] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d c5 06 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
[ 29.746542] RSP: 002b:00007ffece7720f8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
[ 29.748157] RAX: ffffffffffffffda RBX: 0000000000002800 RCX: 00007fa05c439984
[ 29.749682] RDX: 0000000000002800 RSI: 000055f9cfa08000 RDI: 0000000000000004
[ 29.751216] RBP: 00007ffece772140 R08: 0000000000002800 R09: 0000000000000007
[ 29.752743] R10: 0000000000000180 R11: 0000000000000202 R12: 000055f9cfa08000
[ 29.754262] R13: 0000000000000004 R14: 0000000000002800 R15: 00000000000009af
[ 29.755797] </TASK>
[ 29.756285] Modules linked in: zram virtiofs
I'm testing with PROVE_LOCKING on. It seems folio_unmap_invalidate is
called for a swapcache folio, which doesn't work well. The following
patch on top of mm-unstable seems to fix it:
diff --git a/mm/filemap.c b/mm/filemap.c
index 4fe551037bf7..98493443d120 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1605,8 +1605,9 @@ static void folio_end_reclaim_write(struct folio *folio)
 	 * invalidation in that case.
 	 */
 	if (in_task() && folio_trylock(folio)) {
-		if (folio->mapping)
-			folio_unmap_invalidate(folio->mapping, folio, 0);
+		struct address_space *mapping = folio_mapping(folio);
+		if (mapping)
+			folio_unmap_invalidate(mapping, folio, 0);
 		folio_unlock(folio);
 	}
 }
diff --git a/mm/truncate.c b/mm/truncate.c
index e922ceb66c44..4f3e34c52d8b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -565,23 +565,29 @@ int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
 	if (!filemap_release_folio(folio, gfp))
 		return -EBUSY;
-	spin_lock(&mapping->host->i_lock);
+	if (!folio_test_swapcache(folio)) {
+		spin_lock(&mapping->host->i_lock);
+		BUG_ON(folio_has_private(folio));
+	}
+
 	xa_lock_irq(&mapping->i_pages);
 	if (folio_test_dirty(folio))
 		goto failed;
-	BUG_ON(folio_has_private(folio));
 	__filemap_remove_folio(folio, NULL);
 	xa_unlock_irq(&mapping->i_pages);
 	if (mapping_shrinkable(mapping))
 		inode_add_lru(mapping->host);
-	spin_unlock(&mapping->host->i_lock);
+
+	if (!folio_test_swapcache(folio))
+		spin_unlock(&mapping->host->i_lock);
 	filemap_free_folio(mapping, folio);
 	return 1;
 failed:
 	xa_unlock_irq(&mapping->i_pages);
-	spin_unlock(&mapping->host->i_lock);
+	if (!folio_test_swapcache(folio))
+		spin_unlock(&mapping->host->i_lock);
 	return -EBUSY;
 }