Message-ID: <5555EAB8.5060401@suse.cz>
Date: Fri, 15 May 2015 14:46:48 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Hugh Dickins <hughd@...gle.com>
CC: Dave Hansen <dave.hansen@...el.com>, Mel Gorman <mgorman@...e.de>,
Rik van Riel <riel@...hat.com>,
Christoph Lameter <cl@...two.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Steve Capper <steve.capper@...aro.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Jerome Marchand <jmarchan@...hat.com>,
Sasha Levin <sasha.levin@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv5 06/28] mm: handle PTE-mapped tail pages in generic
fast gup implementation
On 04/23/2015 11:03 PM, Kirill A. Shutemov wrote:
> With the new refcounting we are going to see THP tail pages mapped with
> PTEs. The generic fast GUP relies on page_cache_get_speculative() to
> obtain a reference on the page, and page_cache_get_speculative() always
> fails on tail pages, because ->_count on tail pages is always zero.
>
> Let's handle tail pages in gup_pte_range().
>
> The new split_huge_page() will rely on migration entries to freeze the
> page's counts. Rechecking the PTE value after page_cache_get_speculative()
> on the head page should be enough to serialize against a split.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Tested-by: Sasha Levin <sasha.levin@...cle.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/gup.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index ebdb39b3e820..eaeeae15006b 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1051,7 +1051,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> * for an example see gup_get_pte in arch/x86/mm/gup.c
> */
> pte_t pte = READ_ONCE(*ptep);
> - struct page *page;
> + struct page *head, *page;
>
> /*
> * Similar to the PMD case below, NUMA hinting must take slow
> @@ -1063,15 +1063,17 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>
> VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
> page = pte_page(pte);
> + head = compound_head(page);
>
> - if (!page_cache_get_speculative(page))
> + if (!page_cache_get_speculative(head))
> goto pte_unmap;
>
> if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> - put_page(page);
> + put_page(head);
> goto pte_unmap;
> }
>
> + VM_BUG_ON_PAGE(compound_head(page) != head, page);
> pages[*nr] = page;
> (*nr)++;
>
>
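Just to make sure I read the ordering right, after this patch the fast-path
per-PTE logic effectively becomes (paraphrasing the resulting gup_pte_range()
body, nothing new here):

	pte_t pte = READ_ONCE(*ptep);
	struct page *head, *page;

	page = pte_page(pte);
	head = compound_head(page);	/* head page for a PTE-mapped THP tail */

	/* take the reference on the head, since tail ->_count is always zero */
	if (!page_cache_get_speculative(head))
		goto pte_unmap;

	/*
	 * Recheck the PTE: split_huge_page() freezes the counts by installing
	 * migration entries, so a concurrent split changes *ptep and we drop
	 * the head reference and bail out.
	 */
	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
		put_page(head);
		goto pte_unmap;
	}

	pages[*nr] = page;	/* record the tail, pinned via the head */
	(*nr)++;

i.e. the reference is always taken on the head page, while the tail page is
what ends up in pages[]. Looks correct to me.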