Message-Id: <20190605144912.f0059d4bd13c563ddb37877e@linux-foundation.org>
Date:   Wed, 5 Jun 2019 14:49:12 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Pingfan Liu <kernelfans@...il.com>
Cc:     linux-mm@...ck.org, Ira Weiny <ira.weiny@...el.com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Matthew Wilcox <willy@...radead.org>,
        John Hubbard <jhubbard@...dia.com>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Keith Busch <keith.busch@...el.com>,
        Christoph Hellwig <hch@...radead.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCHv3 1/2] mm/gup: fix omission of check on FOLL_LONGTERM in
 get_user_pages_fast()

On Wed,  5 Jun 2019 17:10:19 +0800 Pingfan Liu <kernelfans@...il.com> wrote:

> FOLL_LONGTERM is checked in the slow path, __gup_longterm_unlocked(),
> but not in the fast path, which means a CMA page can leak into a
> longterm-pinned mapping through this gap.
> 
> Place a check in the fast path.

I'm not actually seeing (in either the existing code, this changelog,
or the patch itself) an explanation of *why* we wish to exclude CMA
pages from longterm pinning.

> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2196,6 +2196,26 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
>  	return ret;
>  }
>  
> +#ifdef CONFIG_CMA
> +static inline int reject_cma_pages(int nr_pinned, struct page **pages)
> +{
> +	int i;
> +
> +	for (i = 0; i < nr_pinned; i++)
> +		if (is_migrate_cma_page(pages[i])) {
> +			put_user_pages(pages + i, nr_pinned - i);
> +			return i;
> +		}
> +
> +	return nr_pinned;
> +}

There's no point in inlining this.

The code seems inefficient.  If it encounters a single CMA page it can
end up discarding a possibly significant number of non-CMA pages.  I
guess that doesn't matter much, as get_user_pages(FOLL_LONGTERM) is
rare.  But could we avoid this (and the second pass across pages[]) by
checking for a CMA page within gup_pte_range()?
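
Something along these lines, perhaps (an untested sketch - the exact
placement inside gup_pte_range() and the surrounding locals are from
memory, not from this patch):

	/* in gup_pte_range(), once `page' has been resolved: */
	if (unlikely(flags & FOLL_LONGTERM) &&
	    is_migrate_cma_page(page)) {
		/*
		 * Bail out so the slow path can migrate the CMA page
		 * before establishing a longterm pin.
		 */
		goto pte_unmap;
	}

That way we never pin the CMA page in the first place, rather than
pinning it and then unpinning everything from that point onward.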

> +#else
> +static inline int reject_cma_pages(int nr_pinned, struct page **pages)
> +{
> +	return nr_pinned;
> +}
> +#endif
> +
>  /**
>   * get_user_pages_fast() - pin user pages in memory
>   * @start:	starting user address
> @@ -2236,6 +2256,9 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  		ret = nr;
>  	}
>  
> +	if (unlikely(gup_flags & FOLL_LONGTERM) && nr)
> +		nr = reject_cma_pages(nr, pages);
> +

This would be a suitable place to add a comment explaining why we're
doing this...
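
Something like the below, assuming migratability is in fact the
motivation:

	/*
	 * CMA pages must remain migratable so the contiguous allocator
	 * can reclaim their pageblocks, and a longterm pin would block
	 * migration indefinitely.  So refuse to fast-pin them here and
	 * leave them to the slow path, which can migrate them out of
	 * the CMA region first.
	 */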
