Message-ID: <ab645ebd-6ba4-fbf4-1e3-5e2a2378d06c@google.com>
Date:   Tue, 8 Jun 2021 13:00:02 -0700 (PDT)
From:   Hugh Dickins <hughd@...gle.com>
To:     Xu Yu <xuyu@...ux.alibaba.com>
cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org, hughd@...gle.com,
        akpm@...ux-foundation.org, gavin.dg@...ux.alibaba.com
Subject: Re: [PATCH v2] mm, thp: use head page in __migration_entry_wait

On Tue, 8 Jun 2021, Xu Yu wrote:

> We notice that a hung task happens in a conner but practical scenario when

But I still don't understand what you mean by "conner":
common, corner, something else? Maybe just delete "conner but ".

> CONFIG_PREEMPT_NONE is enabled, as follows.
> 
> Process 0                       Process 1                     Process 2..Inf
> split_huge_page_to_list
>     unmap_page
>         split_huge_pmd_address
>                                 __migration_entry_wait(head)
>                                                               __migration_entry_wait(tail)
>     remap_page (roll back)
>         remove_migration_ptes
>             rmap_walk_anon
>                 cond_resched
> 
> Here __migration_entry_wait(tail) occurs in kernel space, e.g., in
> copy_to_user during fstat, which immediately faults again without
> rescheduling and thus occupies the cpu fully.
> 
> When there are too many processes performing __migration_entry_wait on
> the tail page, remap_page will never be done after cond_resched.
> 
> This makes __migration_entry_wait operate on the compound head page,
> thus waiting for remap_page to complete, whether the THP is split
> successfully or rolled back.
> 
> Note that put_and_wait_on_page_locked helps to drop the page reference
> acquired with get_page_unless_zero, as soon as the page is on the wait
> queue, before actually waiting. So splitting the THP is only prevented
> for a brief interval.
> 
> Fixes: ba98828088ad ("thp: add option to setup migration entries during PMD split")
> Suggested-by: Hugh Dickins <hughd@...gle.com>
> Signed-off-by: Gang Deng <gavin.dg@...ux.alibaba.com>
> Signed-off-by: Xu Yu <xuyu@...ux.alibaba.com>

Thanks:
Acked-by: Hugh Dickins <hughd@...gle.com>

And I hope Andrew will add Cc stable when it goes into his tree.

I'll leave the (independent) discussion of optimal wakeup strategy
to Kirill and Matthew: no strong opinion from me, it works as it is.
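
For anyone reading this in the archive without mm/migrate.c at hand: with
the patch applied, the relevant tail of __migration_entry_wait() ends up
looking roughly like the sketch below. This is a paraphrase, not the
literal source (and the put_and_wait_on_page_locked() signature has varied
between releases); the point is only where compound_head() slots in and
how briefly the extra reference is held.

	/* pte has already been verified to hold a migration entry */
	page = migration_entry_to_page(entry);
	page = compound_head(page);	/* wait on the locked head, not the tail */

	/*
	 * Once page cache replacement of page migration has started,
	 * page_count may be zero: take a reference with
	 * get_page_unless_zero(), and just fault again if that fails.
	 */
	if (!get_page_unless_zero(page))
		goto out;
	pte_unmap_unlock(ptep, ptl);
	put_and_wait_on_page_locked(page);	/* drops the ref once queued, then sleeps */
	return;
out:
	pte_unmap_unlock(ptep, ptl);

So a waiter now sleeps until the head page is unlocked, after remap_page
has either completed the split or rolled it back, instead of returning
straight into another fault.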

> ---
>  mm/migrate.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b234c3f3acb7..41ff2c9896c4 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -295,6 +295,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>  		goto out;
>  
>  	page = migration_entry_to_page(entry);
> +	page = compound_head(page);
>  
>  	/*
>  	 * Once page cache replacement of page migration started, page_count
> -- 
> 2.20.1.2432.ga663e714
> 
> 
