Message-ID: <20150622113525.GE7934@node.dhcp.inet.fi>
Date: Mon, 22 Jun 2015 14:35:25 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Dave Hansen <dave.hansen@...el.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
Christoph Lameter <cl@...two.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Steve Capper <steve.capper@...aro.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
Jerome Marchand <jmarchan@...hat.com>,
Sasha Levin <sasha.levin@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv6 33/36] migrate_pages: try to split pages on queuing
On Thu, Jun 11, 2015 at 11:27:19AM +0200, Vlastimil Babka wrote:
> On 06/03/2015 07:06 PM, Kirill A. Shutemov wrote:
> >We are not able to migrate THPs. It means it's not enough to split only
> >PMD on migration -- we need to split compound page under it too.
> >
> >Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> >---
> > mm/mempolicy.c | 37 +++++++++++++++++++++++++++++++++----
> > 1 file changed, 33 insertions(+), 4 deletions(-)
> >
> >diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> >index 528f6c467cf1..0b1499c2f890 100644
> >--- a/mm/mempolicy.c
> >+++ b/mm/mempolicy.c
> >@@ -489,14 +489,31 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> > struct page *page;
> > struct queue_pages *qp = walk->private;
> > unsigned long flags = qp->flags;
> >- int nid;
> >+ int nid, ret;
> > pte_t *pte;
> > spinlock_t *ptl;
> >
> >- split_huge_pmd(vma, pmd, addr);
> >- if (pmd_trans_unstable(pmd))
> >- return 0;
> >+ if (pmd_trans_huge(*pmd)) {
> >+ ptl = pmd_lock(walk->mm, pmd);
> >+ if (pmd_trans_huge(*pmd)) {
> >+ page = pmd_page(*pmd);
> >+ if (is_huge_zero_page(page)) {
> >+ spin_unlock(ptl);
> >+ split_huge_pmd(vma, pmd, addr);
> >+ } else {
> >+ get_page(page);
> >+ spin_unlock(ptl);
> >+ lock_page(page);
> >+ ret = split_huge_page(page);
> >+ unlock_page(page);
> >+ put_page(page);
> >+ if (ret)
> >+ return 0;
> >+ }
> >+ }
> >+ }
> >
> >+retry:
> > pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> > for (; addr != end; pte++, addr += PAGE_SIZE) {
> > if (!pte_present(*pte))
> >@@ -513,6 +530,18 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> > nid = page_to_nid(page);
> > if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
> > continue;
> >+ if (PageTail(page) && PageAnon(page)) {
>
> Hm, can it really happen that we stumble upon THP tail page here, without
> first stumbling upon it in the previous hunk above? If so, when?
The first hunk catches PMD-mapped THPs; here we deal with PTE-mapped ones.
The scenario: fault in a THP, split the PMD (but not the page), e.g. with
mprotect(), and then try to migrate.
--
Kirill A. Shutemov