Date: Tue, 6 Oct 2015 18:24:00 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Hugh Dickins <hughd@...gle.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
	Mel Gorman <mgorman@...e.de>,
	Rik van Riel <riel@...hat.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Christoph Lameter <cl@...two.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Steve Capper <steve.capper@...aro.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>,
	Jerome Marchand <jmarchan@...hat.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCHv12 33/37] migrate_pages: try to split pages on queueing

We are not able to migrate THPs, so it is not enough to split only the
PMD on migration -- we need to split the compound page under it too.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@...hat.com>
---
 mm/mempolicy.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8102f30a3895..4a02de4e173f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -489,14 +489,31 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	struct page *page;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int nid;
+	int nid, ret;
 	pte_t *pte;
 	spinlock_t *ptl;
 
-	split_huge_pmd(vma, pmd, addr);
-	if (pmd_trans_unstable(pmd))
-		return 0;
+	if (pmd_trans_huge(*pmd)) {
+		ptl = pmd_lock(walk->mm, pmd);
+		if (pmd_trans_huge(*pmd)) {
+			page = pmd_page(*pmd);
+			if (is_huge_zero_page(page)) {
+				spin_unlock(ptl);
+				split_huge_pmd(vma, pmd, addr);
+			} else {
+				get_page(page);
+				spin_unlock(ptl);
+				lock_page(page);
+				ret = split_huge_page(page);
+				unlock_page(page);
+				put_page(page);
+				if (ret)
+					return 0;
+			}
+		}
+	}
 
+retry:
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
@@ -513,6 +530,21 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		nid = page_to_nid(page);
 		if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
 			continue;
+		if (PageTail(page) && PageAnon(page)) {
+			get_page(page);
+			pte_unmap_unlock(pte, ptl);
+			lock_page(page);
+			ret = split_huge_page(page);
+			unlock_page(page);
+			put_page(page);
+			/* Failed to split -- skip. */
+			if (ret) {
+				pte = pte_offset_map_lock(walk->mm, pmd,
+						addr, &ptl);
+				continue;
+			}
+			goto retry;
+		}
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
 			migrate_page_add(page, qp->pagelist, flags);
-- 
2.5.3
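
Both hunks rely on the same split-under-lock sequence: pin the page with
get_page() so it cannot be freed, drop the page-table spinlock (the page
lock may sleep, so it cannot be taken under a spinlock), lock the page,
split it, then release the lock and the pin. Below is a minimal sketch of
that sequence; the helper name try_split_for_migration() is hypothetical
and not part of the patch, but the calls it makes are the kernel APIs the
patch itself uses:

	/*
	 * Sketch of the split-under-lock pattern from the hunks above.
	 * The caller must not hold any spinlocks: lock_page() may sleep.
	 *
	 * Returns 0 if the compound page was split into base pages, or
	 * the error from split_huge_page() (e.g. when the page holds
	 * extra references) if the split failed.
	 */
	static int try_split_for_migration(struct page *page)
	{
		int ret;

		get_page(page);		/* pin: page can't be freed under us */
		lock_page(page);	/* split_huge_page() needs PageLocked */
		ret = split_huge_page(page);
		unlock_page(page);
		put_page(page);		/* drop our pin */

		return ret;
	}

On success the former tail pages are ordinary base pages, so the caller
re-walks the PTE range (the retry label above) and queues them for
migration individually; on failure the patch simply skips the page.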