Date: Thu, 18 Jan 2024 12:54:17 +0100
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, ryan.roberts@....com,
 akpm@...ux-foundation.org, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, mhocko@...e.com, shy828301@...il.com,
 wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
 ying.huang@...el.com, yuzhao@...gle.com, surenb@...gle.com,
 steven.price@....com, Barry Song <v-songbaohua@...o.com>,
 Chuanhua Han <hanchuanhua@...o.com>
Subject: Re: [PATCH RFC 5/6] mm: rmap: weaken the WARN_ON in
 __folio_add_anon_rmap()

On 18.01.24 12:10, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
> 
> In do_swap_page(), while supporting large folio swap-in, we use the helper
> folio_add_anon_rmap_ptes(). This triggers a WARN_ON in __folio_add_anon_rmap().
> We can quiet the warning in two ways:
> 1. In do_swap_page(), call folio_add_new_anon_rmap() if we are sure the large
> folio is a newly allocated one, and call folio_add_anon_rmap_ptes() if we find
> the large folio in the swapcache.
> 2. Always call folio_add_anon_rmap_ptes() in do_swap_page(), but make the
> WARN_ON in __folio_add_anon_rmap() less sensitive.
> 
> Option 2 seems better for do_swap_page() as it can use unified code for
> all cases.
> 
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> Tested-by: Chuanhua Han <hanchuanhua@...o.com>
> ---
>   mm/rmap.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f5d43edad529..469fcfd32317 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1304,7 +1304,10 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>   		 * page.
>   		 */
>   		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
> -				 level != RMAP_LEVEL_PMD, folio);
> +				 level != RMAP_LEVEL_PMD &&
> +				 (!IS_ALIGNED(address, nr_pages * PAGE_SIZE) ||
> +				 (folio_test_swapcache(folio) && !IS_ALIGNED(folio->index, nr_pages)) ||
> +				 page != &folio->page), folio);
>   		__folio_set_anon(folio, vma, address,
>   				 !!(flags & RMAP_EXCLUSIVE));
>   	} else if (likely(!folio_test_ksm(folio))) {
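
For reference, option 1 from the commit message would mean dispatching
in do_swap_page() roughly like this (a simplified sketch only, not the
actual patch; page, nr_pages and rmap_flags are assumed from the
surrounding swap-in context):

	if (!folio_test_anon(folio)) {
		/* Newly allocated large folio: no anon rmap set up yet. */
		folio_add_new_anon_rmap(folio, vma, vmf->address);
	} else {
		/*
		 * Large folio found in the swapcache, possibly already
		 * mapped elsewhere.
		 */
		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma,
					 vmf->address, rmap_flags);
	}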


I have on my todo list to move all that !anon handling out of
folio_add_anon_rmap_ptes(), and instead make the swapin code call
folio_add_new_anon_rmap(), where we'd then have to pass an exclusive
flag (-> whole new folio exclusive).

That's the cleaner approach.
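
Roughly (a sketch of that direction only; this flags argument to
folio_add_new_anon_rmap() is hypothetical and does not exist yet):

	/* The whole new folio is either fully exclusive or not at all. */
	folio_add_new_anon_rmap(folio, vma, vmf->address,
				exclusive ? RMAP_EXCLUSIVE : RMAP_NONE);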

-- 
Cheers,

David / dhildenb

