Message-ID: <4c37d4e1-e656-48e3-ac80-83c09fe92625@suse.com>
Date: Sun, 1 Feb 2026 07:38:44 +1030
From: Qu Wenruo <wqu@...e.com>
To: JP Kobryn <inwardvessel@...il.com>, boris@....io, clm@...com,
 dsterba@...e.com
Cc: linux-btrfs@...r.kernel.org, stable@...r.kernel.org,
 linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH 6.12] btrfs: prevent use-after-free in
 prealloc_file_extent_cluster()



On 2026/2/1 05:23, JP Kobryn wrote:
> Users of filemap_lock_folio() need to guard against the situation where
> release_folio() has been invoked during reclaim but the folio was
> ultimately not removed from the page cache. This patch covers one location
> that was overlooked. The affected code has changed as of 6.17, so this patch
> only targets the earlier stable trees.
> 
> After acquiring the folio, use set_folio_extent_mapped() to ensure the
> folio private state is valid. This is especially important in the subpage
> case, where the private field is an allocated struct containing bitmap and
> lock data.
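
Just to spell out the mechanism for anyone reading this from the archive:
the reason set_folio_extent_mapped() works as a guard here is that it is
idempotent. A rough sketch of the idea only (simplified, and the attach
helper name below is a placeholder, not the real fs/btrfs code):

    /* Simplified sketch, not the literal kernel implementation. */
    int set_folio_extent_mapped(struct folio *folio)
    {
            if (folio_test_private(folio))
                    return 0;       /* private state still attached */

            /*
             * Reclaim already ran ->release_folio() and detached the
             * private state, so (re)allocate and attach it. In the
             * subpage case this is a real allocation (bitmaps + lock),
             * which is why it can fail and why the caller must check
             * the return value.
             */
            return attach_folio_private_state(folio);       /* placeholder */
    }

So calling it right after filemap_lock_folio() either confirms the private
state is still present or re-creates it before the subpage helpers
dereference folio->private.
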
> 
> Without this protection, the race below is possible:
> 
> [mm] page cache reclaim path        [fs] relocation in subpage mode
> shrink_folio_list()
>    folio_trylock() /* lock acquired */
>    filemap_release_folio()
>      mapping->a_ops->release_folio()
>        btrfs_release_folio()
>          __btrfs_release_folio()
>            clear_folio_extent_mapped()
>              btrfs_detach_folio_state()
>                bfs = folio_detach_private(folio)
>                btrfs_free_folio_state(bfs)
>                  kfree(bfs) /* point A */
> 
>                                     prealloc_file_extent_cluster()
>                                       filemap_lock_folio()
>                                         folio_try_get() /* inc refcount */
>                                         folio_lock() /* wait for lock */
> 
>    if (...)
>      ...
>    else if (!mapping || !__remove_mapping(..))
>      /*
>       * __remove_mapping() returns zero when
>       * folio_ref_freeze(folio, refcount) fails /* point B */
>       */
>      goto keep_locked /* folio remains in cache */
> 
> keep_locked:
>    folio_unlock(folio) /* lock released */
> 
>                                     /* lock acquired */
>                                     btrfs_subpage_clear_uptodate()
>                                       bfs = folio->private /* use-after-free */

The patch itself and the root cause analysis look good to me.

Reviewed-by: Qu Wenruo <wqu@...e.com>

> 
> This patch is intended as a minimal fix for backporting to affected
> kernels. As of 6.17, a commit [0] replaced the vulnerable
> filemap_lock_folio() + btrfs_subpage_clear_uptodate() sequence with
> filemap_invalidate_inode(), avoiding the race entirely. That commit was part
> of a series with a different goal of preparing for large folio support, so
> backporting may not be straightforward.
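
(For context: as the paragraph above says, the upstream code after that
commit simply drops the whole cluster range from the page cache instead of
touching each folio. Roughly, and assuming filemap_invalidate_inode() still
takes (inode, flush, start, end) as I remember it, the upstream direction
looks something like the sketch below; the variable names are placeholders,
not the literal hunk from 4e346baee95f:

    /*
     * Sketch only: invalidate the folios covering the cluster so
     * relocation re-reads them, instead of locking each folio and
     * clearing its uptodate bits by hand.
     */
    ret = filemap_invalidate_inode(vfs_inode, true, cluster_start, cluster_end);

With no per-folio lock/clear sequence left, there is no folio->private
access for reclaim to race with.)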

However, I'm not sure if the stable trees even accept patches that are not
upstream.

Thus the stable maintainers may ask you the same question I asked before:
why not backport the upstream commit 4e346baee95f?

If that commit's message lacks the explanation of why it is a bug fix, I
believe you can modify the commit message to include the analysis and the
Fixes tag.
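
Something like the following tags would make the justification explicit
(only a sketch of the format; the version range is an example and the
trees that actually carry 9d9ea1e68a05 need to be double-checked):

    Fixes: 9d9ea1e68a05 ("btrfs: subpage: fix relocation potentially overwriting last page data")
    Cc: stable@vger.kernel.org # 6.12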


I'm also curious to learn the proper way to handle such a situation.

Thanks,
Qu

> 
> Signed-off-by: JP Kobryn <inwardvessel@...il.com>
> Fixes: 9d9ea1e68a05 ("btrfs: subpage: fix relocation potentially overwriting last page data")
> 
> [0] 4e346baee95f ("btrfs: reloc: unconditionally invalidate the page cache for each cluster")
> ---
>   fs/btrfs/relocation.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index 0d5a3846811a..040e8f28b200 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -2811,6 +2811,20 @@ static noinline_for_stack int prealloc_file_extent_cluster(struct reloc_control
>   		 * will re-read the whole page anyway.
>   		 */
>   		if (!IS_ERR(folio)) {
> +			/*
> +			 * release_folio() could have cleared the folio private data
> +			 * while we were not holding the lock.
> +			 * Reset the mapping if needed so subpage operations can access
> +			 * a valid private folio state.
> +			 */
> +			ret = set_folio_extent_mapped(folio);
> +			if (ret) {
> +				folio_unlock(folio);
> +				folio_put(folio);
> +
> +				return ret;
> +			}
> +
>   			btrfs_subpage_clear_uptodate(fs_info, folio, i_size,
>   					round_up(i_size, PAGE_SIZE) - i_size);
>   			folio_unlock(folio);

