Message-ID: <6650323f-dbc9-f069-000b-f6b0f941a065@suse.cz>
Date:   Wed, 31 Jul 2019 17:14:16 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-kernel@...r.kernel.org
Cc:     stable@...r.kernel.org, Jann Horn <jannh@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Ben Hutchings <ben.hutchings@...ethink.co.uk>
Subject: Re: [PATCH 4.9 57/83] mm: prevent get_user_pages() from overflowing
 page refcount

On 6/9/19 6:42 PM, Greg Kroah-Hartman wrote:
> From: Linus Torvalds <torvalds@...ux-foundation.org>
> 
> commit 8fde12ca79aff9b5ba951fce1a2641901b8d8e64 upstream.
> 
> If the page refcount wraps around past zero, it will be freed while
> there are still four billion references to it.  One of the possible
> avenues for an attacker to try to make this happen is by doing direct IO
> on a page multiple times.  This patch makes get_user_pages() refuse to
> take a new page reference if there are already more than two billion
> references to the page.
> 
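As an aside on the arithmetic: the page refcount is a signed 32-bit
counter, so roughly 2^32 taken references wrap it past zero and the page
gets freed while still in use. Refusing new references once the count
has already passed two billion (i.e. once it reads as negative) still
leaves about 2^31 increments of headroom. A minimal userspace sketch of
that saturation idea, with a plain int standing in for the kernel's
atomic page_ref machinery and an explicit threshold in place of the
kernel's actual page_ref_count() < 0 test:

	#include <limits.h>
	#include <stdbool.h>

	/* Refuse new references once the counter is in dangerous
	 * territory, so it can never be walked all the way around
	 * to zero. */
	static bool try_get_ref(int *refcount)
	{
		if (*refcount > INT_MAX / 2)
			return false;
		(*refcount)++;
		return true;
	}
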
> Reported-by: Jann Horn <jannh@...gle.com>
> Acked-by: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
> [bwh: Backported to 4.9:
>  - Add the "err" variable in follow_hugetlb_page()
>  - Adjust context]
> Signed-off-by: Ben Hutchings <ben.hutchings@...ethink.co.uk>
> Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> ---
>  mm/gup.c     |   45 ++++++++++++++++++++++++++++++++++-----------
>  mm/hugetlb.c |   16 +++++++++++++++-
>  2 files changed, 49 insertions(+), 12 deletions(-)
> 

...

> @@ -1231,6 +1240,20 @@ struct page *get_dump_page(unsigned long
>   */
>  #ifdef CONFIG_HAVE_GENERIC_RCU_GUP
>  
> +/*
> + * Return the compound head page with ref appropriately incremented,
> + * or NULL if that failed.
> + */
> +static inline struct page *try_get_compound_head(struct page *page, int refs)
> +{
> +	struct page *head = compound_head(page);
> +	if (WARN_ON_ONCE(page_ref_count(head) < 0))
> +		return NULL;
> +	if (unlikely(!page_cache_add_speculative(head, refs)))
> +		return NULL;
> +	return head;
> +}
> +
>  #ifdef __HAVE_ARCH_PTE_SPECIAL
>  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  			 int write, struct page **pages, int *nr)
> @@ -1263,9 +1286,9 @@ static int gup_pte_range(pmd_t pmd, unsi
>  
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
> -		head = compound_head(page);
>  
> -		if (!page_cache_get_speculative(head))
> +		head = try_get_compound_head(page, 1);

BTW, several arches in 4.9, including x86, have an arch-specific fast
GUP implementation, which is not touched by this backport. I didn't
check whether Jann's exploit ends up using the fast or the non-fast
path, though.
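
For reference, the pattern those arch-specific implementations keep
looks roughly like this (a sketch, not verbatim 4.9 source; in
arch/x86/mm/gup.c the references come from get_page() and
page_cache_add_speculative()):

	head = compound_head(page);
	/* No page_ref_count(head) < 0 guard here, unlike the generic
	 * path after this patch, so a wrapped-around refcount would
	 * not be caught. */
	if (!page_cache_add_speculative(head, refs))
		return 0;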
