Message-ID: <519D18AF-DD01-4107-96D2-D8E33058B2CB@vmware.com>
Date:   Tue, 3 Dec 2019 12:25:24 +0000
From:   Ajay Kaher <akaher@...are.com>
To:     Vlastimil Babka <vbabka@...e.cz>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jann Horn <jannh@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        "stable@...nel.org" <stable@...nel.org>,
        Srivatsa Bhat <srivatsab@...are.com>,
        "srivatsa@...il.mit.edu" <srivatsa@...il.mit.edu>,
        Vasavi Sirnapalli <vsirnapalli@...are.com>
Subject: Re: [PATCH STABLE 4.4 5/8] mm: prevent get_user_pages() from
 overflowing page refcount



On 08/11/19, 3:08 PM, "Vlastimil Babka" <vbabka@...e.cz> wrote:

> From: Linus Torvalds <torvalds@...ux-foundation.org>
> 
> commit 8fde12ca79aff9b5ba951fce1a2641901b8d8e64 upstream.
>    
> [ 4.4 backport: there's get_page_foll(), so add try_get_page()-like checks
>                 in there, enabled by a new parameter, which is false where
>                 upstream patch doesn't replace get_page() with try_get_page()
>                 (the THP and hugetlb callers).

Could we have try_get_page_foll() (sketched below), as in:
https://lore.kernel.org/stable/1570581863-12090-3-git-send-email-akaher@vmware.com/

+ Code stays in sync with upstream, since try_get_page() is kept as-is
+ No need to add an extra argument to get_page_foll()
+ No need to modify the callers of get_page_foll()
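
Roughly, such a helper could look like this (a sketch only; the
actual code is in the patch at the lore link above):

/*
 * Sketch: try_get_page_foll() with the same locking rules as
 * get_page_foll() (proper PT lock held), but failing instead of
 * overflowing the refcount.
 */
static inline bool try_get_page_foll(struct page *page)
{
	if (unlikely(PageTail(page))) {
		/* Safe: __split_huge_page_refcount() can't run under the PT lock. */
		if (WARN_ON_ONCE(page_ref_count(compound_head(page)) <= 0))
			return false;
		__get_page_tail_foll(page, true);
	} else {
		if (WARN_ON_ONCE(page_ref_count(page) <= 0))
			return false;
		atomic_inc(&page->_count);
	}
	return true;
}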

>                 In gup_pte_range(), we don't expect tail pages, so just check
>                 page ref count instead of try_get_compound_head()

Technically it's fine. If you want to keep the code of the stable
versions in sync with the latest versions, this could be done as
follows (without any modification of the upstream patch for
gup_pte_range()):

Apply 7aef4172c7957d7e65fc172be4c99becaef855d4 before applying
8fde12ca79aff9b5ba951fce1a2641901b8d8e64, as done here:
https://lore.kernel.org/stable/1570581863-12090-4-git-send-email-akaher@vmware.com/

>                 Also patch arch-specific variants of gup.c for x86 and s390,
>                 leaving mips, sh, sparc alone                               ]
> 
    
> ---
>  arch/s390/mm/gup.c |  6 ++++--
>  arch/x86/mm/gup.c  |  9 ++++++++-
>  mm/gup.c           | 39 +++++++++++++++++++++++++++++++--------
>  mm/huge_memory.c   |  2 +-
>  mm/hugetlb.c       | 18 ++++++++++++++++--
>  mm/internal.h      | 12 +++++++++---
>  6 files changed, 69 insertions(+), 17 deletions(-)
>    
> #ifdef __HAVE_ARCH_PTE_SPECIAL
>  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  			 int write, struct page **pages, int *nr)
> @@ -1083,6 +1103,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
>  
> +		if (WARN_ON_ONCE(page_ref_count(page) < 0))
> +			goto pte_unmap;
> +
>  		if (!page_cache_get_speculative(page))
>  			goto pte_unmap;
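
(For reference, the upstream helper avoided here is
try_get_compound_head() from 8fde12ca; quoting it from memory of the
mainline commit, so please double-check against upstream:

/* Return the compound head with its refcount raised, or NULL on failure. */
static inline struct page *try_get_compound_head(struct page *page, int refs)
{
	struct page *head = compound_head(page);

	if (WARN_ON_ONCE(page_ref_count(head) < 0))
		return NULL;
	if (unlikely(!page_cache_add_speculative(head, refs)))
		return NULL;
	return head;
}

Since gup_pte_range() doesn't expect tail pages here, the plain
page_ref_count() check above gives the same overflow protection for
this path.)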


 
> diff --git a/mm/internal.h b/mm/internal.h
> index a6639c72780a..b52041969d06 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -93,23 +93,29 @@ static inline void __get_page_tail_foll(struct page *page,
>   * follow_page() and it must be called while holding the proper PT
>   * lock while the pte (or pmd_trans_huge) is still mapping the page.
>   */
> -static inline void get_page_foll(struct page *page)
> +static inline bool get_page_foll(struct page *page, bool check)
>  {
> -	if (unlikely(PageTail(page)))
> +	if (unlikely(PageTail(page))) {
>  		/*
>  		 * This is safe only because
>  		 * __split_huge_page_refcount() can't run under
>  		 * get_page_foll() because we hold the proper PT lock.
>  		 */
> +		if (check && WARN_ON_ONCE(
> +				page_ref_count(compound_head(page)) <= 0))
> +			return false;
>  		__get_page_tail_foll(page, true);
> -	else {
> +	} else {
>  		/*
>  		 * Getting a normal page or the head of a compound page
>  		 * requires to already have an elevated page->_count.
>  		 */
>  		VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
> +		if (check && WARN_ON_ONCE(page_ref_count(page) <= 0))
> +			return false;
>  		atomic_inc(&page->_count);
>  	}
> +	return true;
>  }
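
For comparison, the try_get_page() helper from the upstream series,
which the checks above mirror (upstream uses page_ref_inc(); on 4.4
this would be atomic_inc(&page->_count)):

/* Upstream helper that the added checks are modeled on. */
static inline bool try_get_page(struct page *page)
{
	page = compound_head(page);
	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
		return false;
	page_ref_inc(page);
	return true;
}

A separate try_get_page_foll() would leave both this helper and
get_page_foll() untouched.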