Message-ID: <cf16a19d-cc3b-4f19-a46f-83554a472368@redhat.com>
Date: Tue, 26 Mar 2024 17:10:43 +0100
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
 linux-mm@...ck.org
Cc: chrisl@...nel.org, hanchuanhua@...o.com, hughd@...gle.com,
 linux-kernel@...r.kernel.org, mhocko@...e.com, ryan.roberts@....com,
 shy828301@...il.com, v-songbaohua@...o.com, wangkefeng.wang@...wei.com,
 willy@...radead.org, xiang@...nel.org, ying.huang@...el.com,
 yuzhao@...gle.com
Subject: Re: [PATCH v2] mm: hold PTL from the first PTE while reclaiming a
 large folio

On 06.03.24 10:52, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
> 
> Within try_to_unmap_one(), page_vma_mapped_walk() races with concurrent
> PTE modifications that clear a PTE before writing its new value. While
> iterating over the PTEs of a large folio, it only starts acquiring the
> PTL from the first valid (present) PTE. Since such modifications can
> temporarily leave PTEs as pte_none, the initial PTEs of a large folio
> might be skipped by try_to_unmap_one().
> For example, for an anon folio, if we skip PTE0, PTE0 may still be
> present while PTE1 ~ PTE(nr_pages - 1) have become swap entries after
> try_to_unmap_one().
> The folio then remains mapped, fails to be reclaimed, and is put back
> on the LRU in this round.
> This also breaks PTE optimizations such as CONT-PTE on the large folio
> and may lead to an accidental folio_split() afterwards.
> And since part of the PTEs are now swap entries, accessing those
> subpages will incur do_swap_page() overhead.
> Although the kernel can withstand all of the above issues, the
> situation is still awkward and worth improving.
> The same race also occurs with small folios, but since they have only
> one PTE, they cannot be partially unmapped.
> This patch holds PTL from PTE0, allowing us to avoid reading PTE
> values that are in the process of being transformed. With stable
> PTE values, we can ensure that this large folio is either
> completely reclaimed or that all PTEs remain untouched in this
> round.
> A corner case is that if we hold the PTL from PTE0 while most of the
> initial PTEs have already been unmapped, we may increase the duration
> of holding the PTL. Thus we only apply this optimization to folios
> which are still entirely mapped (i.e. not on the deferred_split
> list).
> 
> Cc: Hugh Dickins <hughd@...gle.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
>   v2:
>    * Refine commit message and code comment after reading all comments
>      from Ryan and David, thanks!
>    * Avoid increasing the duration of PTL by applying optimization
>      on folios not in deferred_split_list with respect to Ying's
>      comment, thanks!
> 
>   mm/vmscan.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 0b888a2afa58..7106741387e8 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1270,6 +1270,18 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>   
>   			if (folio_test_pmd_mappable(folio))
>   				flags |= TTU_SPLIT_HUGE_PMD;
> +			/*
> +			 * Without TTU_SYNC, try_to_unmap only begins to hold the PTL
> +			 * from the first present PTE within a large folio. Some initial
> +			 * PTEs might be skipped due to races with parallel PTE writes
> +			 * in which PTEs are cleared temporarily before new present
> +			 * values are written. This can leave a large folio still mapped
> +			 * while some of its subpages have already been unmapped by
> +			 * try_to_unmap; TTU_SYNC makes try_to_unmap acquire the PTL
> +			 * from the first PTE, avoiding these temporary PTE values.
> +			 */
> +			if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
> +				flags |= TTU_SYNC;
>   
>   			try_to_unmap(folio, flags);
>   			if (folio_mapped(folio)) {
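
For reference, here is a minimal, single-threaded userspace sketch of the
interleaving the commit message describes; it is purely illustrative (the
pte[] array, step ordering and names are made up for this example, none of
it is kernel code). The lazy walk samples PTE0 during the transient
pte_none() window, starts at PTE1, and the "folio" ends up only partially
unmapped:

#include <stdio.h>

#define NR_PAGES 4

/* stand-in for the PTEs of a 4-page large folio; non-zero == present */
static long pte[NR_PAGES] = { 1, 1, 1, 1 };

/* lazy walk (no TTU_SYNC): skip leading non-present PTEs before "locking" */
static int first_present_pte(void)
{
	int i = 0;

	while (i < NR_PAGES && pte[i] == 0)
		i++;
	return i;
}

int main(void)
{
	int i, start;

	pte[0] = 0;			/* step 1: writer transiently clears PTE0        */
	start = first_present_pte();	/* step 2: walk skips PTE0, starts at PTE1       */
	pte[0] = 1;			/* step 3: writer installs the new present value */
	for (i = start; i < NR_PAGES; i++)
		pte[i] = 0;		/* step 4: walk "unmaps" only PTE1..PTE3         */

	for (i = 0; i < NR_PAGES; i++)
		printf("pte[%d] = %ld\n", i, pte[i]);
	/* pte[0] is still present while the rest are gone: a partial unmap */
	return 0;
}

As I understand it, with TTU_SYNC set, try_to_unmap_one() passes PVMW_SYNC
to page_vma_mapped_walk(), which takes the PTL before examining the first
PTE, so step 2 above can no longer observe the transient pte_none() value.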

Hopefully this won't have unexpected performance "surprises".

I do wonder if we should really care about the "_deferred_list" 
optimization here, though; I'd just drop it.

In any case

Acked-by: David Hildenbrand <david@...hat.com>

-- 
Cheers,

David / dhildenb

