Date: Tue, 8 Nov 2016 08:31:51 +0900
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: linux-mm@...ck.org
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
    Hugh Dickins <hughd@...gle.com>,
    Andrew Morton <akpm@...ux-foundation.org>,
    Dave Hansen <dave.hansen@...el.com>,
    Andrea Arcangeli <aarcange@...hat.com>,
    Mel Gorman <mgorman@...hsingularity.net>,
    Michal Hocko <mhocko@...nel.org>,
    Vlastimil Babka <vbabka@...e.cz>,
    Pavel Emelyanov <xemul@...allels.com>,
    Zi Yan <zi.yan@...rutgers.edu>,
    Balbir Singh <bsingharora@...il.com>,
    linux-kernel@...r.kernel.org,
    Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
    Naoya Horiguchi <nao.horiguchi@...il.com>
Subject: [PATCH v2 06/12] mm: thp: enable thp migration in generic path

This patch makes it possible to support thp migration gradually. If we fail
to allocate the destination page as a thp, we simply split the source thp as
we do now, and then enter the normal page migration path. If we succeed in
allocating a destination thp, we enter the thp migration path. Subsequent
patches actually enable thp migration for each caller of page migration by
allowing its get_new_page() callback to allocate thps.
Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
---
 mm/migrate.c | 2 +-
 mm/rmap.c    | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/migrate.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/migrate.c
index 54f2eb6..97ab8d9 100644
--- v4.9-rc2-mmotm-2016-10-27-18-27/mm/migrate.c
+++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/migrate.c
@@ -1142,7 +1142,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		goto out;
 	}

-	if (unlikely(PageTransHuge(page))) {
+	if (unlikely(PageTransHuge(page) && !PageTransHuge(newpage))) {
 		lock_page(page);
 		rc = split_huge_page(page);
 		unlock_page(page);
diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/rmap.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/rmap.c
index a4be307..a0b665c 100644
--- v4.9-rc2-mmotm-2016-10-27-18-27/mm/rmap.c
+++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/rmap.c
@@ -1443,6 +1443,13 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	struct rmap_private *rp = arg;
 	enum ttu_flags flags = rp->flags;

+	if (flags & TTU_MIGRATION) {
+		if (!PageHuge(page) && PageTransCompound(page)) {
+			set_pmd_migration_entry(page, vma, address);
+			goto out;
+		}
+	}
+
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
 		goto out;
--
2.7.0