Message-ID: <CAMgjq7C7dxGqPu4=yLCrKe1vATemmXEgH6e-XyF+iQSSBYdiHA@mail.gmail.com>
Date: Tue, 29 Apr 2025 02:54:31 +0800
From: Kairui Song <ryncsn@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>, Chris Li <chrisl@...nel.org>,
Yosry Ahmed <yosryahmed@...gle.com>, "Huang, Ying" <ying.huang@...ux.alibaba.com>,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/6] filemap: do not use folio_contains for swap cache folios
On Mon, Apr 28, 2025 at 10:58 AM Kairui Song <ryncsn@...il.com> wrote:
>
> On Mon, Apr 28, 2025 at 8:44 AM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Mon, Apr 28, 2025 at 02:59:06AM +0800, Kairui Song wrote:
> > > For filemap and truncate, folio_contains is only used for sanity checks
> > > to verify the folio index matches the expected lookup/invalidation target.
> > > The swap cache does not utilize filemap or truncate helpers in ways that
> > > would trigger these checks, as it mostly implements its own cache management.
> > >
> > > Shmem won't interact with these sanity checks either unless something
> > > went wrong, in which case it would directly trigger a BUG, because swap
> > > cache indexes are unrelated to shmem indexes and would almost certainly
> > > mismatch (unless they happen to collide).
> >
> > It does happen though. If shmem is writing the folio to swap at the
> > same time that the file containing the folio is being truncated, we
> > can hit this.
>
> Thanks for the info! I didn't check it in detail because that would
> likely trigger a BUG_ON, but so far I didn't see any BUG_ON report
> from there.
>
> Just checked there are two users in truncate:
>
> One will lock the folio and check `folio->mapping != mapping` first;
> on swapout shmem removes the folio from the shmem mapping, so this
> check skips the folio before the folio_contains check is reached.
>
> The other one might hit the check, though the time window is extremely
> tiny: only if truncate's `xa_is_value(folio)` check passes while
> another CPU is running between `folio_alloc_swap` and
> `shmem_delete_from_page_cache` in shmem_writepage. In that case the
> next VM_BUG_ON_FOLIO(!folio_contains) will fail, as the folio is now a
> swap cache folio, not a shmem folio anymore. Let me double check if
> this needs another fix.
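
To make the two truncate users quoted above concrete, here is a minimal
sketch of the two check patterns. The "_sketch" helper names are made up
and the structure is heavily simplified; this is not the real
mm/truncate.c code, just the shape of the ordering being discussed:

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * First user: the folio is locked and the mapping is re-checked.  On
 * swapout shmem removes the folio from the shmem mapping, so the folio
 * is skipped before folio_contains() is ever reached.
 */
static void truncate_user_one_sketch(struct address_space *mapping,
				     struct folio *folio, pgoff_t index)
{
	folio_lock(folio);
	if (folio->mapping != mapping) {
		folio_unlock(folio);
		return;
	}
	VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
	folio_unlock(folio);
}

/*
 * Second user: the lookup only verified the entry was not an xa value
 * entry.  If another CPU sits between folio_alloc_swap() and
 * shmem_delete_from_page_cache() in shmem_writepage(), the folio seen
 * here is already a swap cache folio, so folio_contains() compares a
 * swap offset against the shmem index and the assertion fires.
 */
static void truncate_user_two_sketch(struct folio *folio, pgoff_t index)
{
	VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
}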
Checking all the code paths, shmem manages to avoid every possible way
of calling into truncate_inode_pages_range, which is the only function
that seems like it may call folio_contains with a swap cache folio
(except tiny-shmem, which uses this function directly for truncate;
we can ignore that as it's basically just ramfs).
For truncate, shmem needs to either zap a whole (large) swap/folio
entry, zero part of a folio, or swap in a large folio so that part of
it can be zeroed (using shmem_get_partial_folio). The swapin part is a
bit special, so calling the generic truncate helpers might cause
unexpected behaviour. It's a similar story for filemap lookup.
So shmem won't call into the truncate helpers here that may risk
calling folio_contains with a swap cache folio.
Even if it somehow does, this commit won't change the BUG_ON behaviour,
except that it now tells the user the folio shouldn't be in the swap
cache at all, instead of reporting a bogus index. So I think this
commit is good to have, to make the swap cache less exposed.
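
For reference, the shape of the change (paraphrased, not the exact
hunks; the "_sketch" names are made up and the exact assertion flavour
may differ) is roughly:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Before: the swap cache case was folded into the index math, so a
 * stray swap cache folio showed up as a confusing index mismatch. */
static inline bool folio_contains_old_sketch(struct folio *folio, pgoff_t index)
{
	return index - folio_index(folio) < folio_nr_pages(folio);
}

/* After: a swap cache folio is reported as unexpected up front, and the
 * index math uses folio->index directly, so a mismatch still trips the
 * caller's VM_BUG_ON_FOLIO as before. */
static inline bool folio_contains_new_sketch(struct folio *folio, pgoff_t index)
{
	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
	return index - folio->index < folio_nr_pages(folio);
}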
---
List of potential call chains that may call into the truncate helpers
here and are not initiated from other FS / block code; none of them
will be used by shmem:
do_dentry_open /* filemap_nr_thps is always 0 for shmem */
  truncate_inode_pages
    truncate_inode_pages_range

dquot_disable /* No quota file for shmem */
  truncate_inode_pages
    truncate_inode_pages_range

dquot_quota_sync /* No quota file for shmem */
  truncate_inode_pages
    truncate_inode_pages_range

truncate_inode_pages_final /* Overridden by shmem_evict_inode */
  truncate_inode_pages
    truncate_inode_pages_range

simple_setattr /* Overridden by shmem_setattr */
  truncate_setsize
    truncate_pagecache
      truncate_inode_pages
        truncate_inode_pages_range

put_aio_ring_file /* AIO calls it for its private file */
  truncate_setsize
    truncate_pagecache
      truncate_inode_pages
        truncate_inode_pages_range

truncate_pagecache /* No users except other fs */
  truncate_inode_pages
    truncate_inode_pages_range

truncate_pagecache_range /* No users except other fs */
  truncate_inode_pages_range
---
Similarly, for the invalidate helpers:

invalidate_inode_pages2 /* No users except other fs */
  invalidate_inode_pages2_range

filemap_invalidate_pages /* Only used by block / direct IO */
  invalidate_inode_pages2_range

filemap_invalidate_inode /* No users except other fs */
  invalidate_inode_pages2_range

kiocb_invalidate_post_direct_write /* Only used by block / direct IO */
  invalidate_inode_pages2_range