Message-Id: <20180417020230.26412-16-ying.huang@intel.com>
Date: Tue, 17 Apr 2018 10:02:24 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Tim Chen <tim.c.chen@...el.com>, Andi Kleen <ak@...ux.intel.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Huang Ying <ying.huang@...el.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Andrea Arcangeli <aarcange@...hat.com>, Michal Hocko <mhocko@...e.com>,
	Johannes Weiner <hannes@...xchg.org>, Shaohua Li <shli@...nel.org>,
	Hugh Dickins <hughd@...gle.com>, Minchan Kim <minchan@...nel.org>,
	Rik van Riel <riel@...hat.com>, Dave Hansen <dave.hansen@...ux.intel.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>, Zi Yan <zi.yan@...rutgers.edu>
Subject: [PATCH -mm 15/21] mm, THP, swap: Support to copy PMD swap mapping when fork()

From: Huang Ying <ying.huang@...el.com>

During fork, the page table needs to be copied from parent to child.  A
PMD swap mapping needs to be copied too, and the swap reference count
needs to be increased.  When the huge swap cluster has already been
split, we need to split the PMD swap mapping and fall back to PTE
copying.

When swap count continuation fails to allocate a page with GFP_ATOMIC,
we need to unlock the spinlock and try again with GFP_KERNEL.

Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Shaohua Li <shli@...nel.org>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: Zi Yan <zi.yan@...rutgers.edu>
---
 mm/huge_memory.c | 72 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 57 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1d3309c61136..b0a9a4afc129 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -944,6 +944,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(!pgtable))
 		goto out;
 
+retry:
 	dst_ptl = pmd_lock(dst_mm, dst_pmd);
 	src_ptl = pmd_lockptr(src_mm, src_pmd);
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
@@ -951,26 +952,67 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	ret = -EAGAIN;
 	pmd = *src_pmd;
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (unlikely(is_swap_pmd(pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
 
-		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		if (is_write_migration_entry(entry)) {
-			make_migration_entry_read(&entry);
-			pmd = swp_entry_to_pmd(entry);
-			if (pmd_swp_soft_dirty(*src_pmd))
-				pmd = pmd_swp_mksoft_dirty(pmd);
-			set_pmd_at(src_mm, addr, src_pmd, pmd);
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+		if (is_migration_entry(entry)) {
+			if (is_write_migration_entry(entry)) {
+				make_migration_entry_read(&entry);
+				pmd = swp_entry_to_pmd(entry);
+				if (pmd_swp_soft_dirty(*src_pmd))
+					pmd = pmd_swp_mksoft_dirty(pmd);
+				set_pmd_at(src_mm, addr, src_pmd, pmd);
+			}
+			add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+			mm_inc_nr_ptes(dst_mm);
+			pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+			set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+			ret = 0;
+			goto out_unlock;
 		}
-		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
-		mm_inc_nr_ptes(dst_mm);
-		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
-		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
-		ret = 0;
-		goto out_unlock;
-	}
 #endif
 
+		if (thp_swap_supported() && !non_swap_entry(entry)) {
+			ret = swap_duplicate(&entry, true);
+			if (!ret) {
+				add_mm_counter(dst_mm, MM_SWAPENTS,
+					       HPAGE_PMD_NR);
+				mm_inc_nr_ptes(dst_mm);
+				pgtable_trans_huge_deposit(dst_mm, dst_pmd,
+							   pgtable);
+				set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+				/* make sure dst_mm is on swapoff's mmlist. */
+				if (unlikely(list_empty(&dst_mm->mmlist))) {
+					spin_lock(&mmlist_lock);
+					if (list_empty(&dst_mm->mmlist))
+						list_add(&dst_mm->mmlist,
+							 &src_mm->mmlist);
+					spin_unlock(&mmlist_lock);
+				}
+			} else if (ret == -ENOTDIR) {
+				/*
+				 * The swap cluster has been split, split the
+				 * pmd map now
+				 */
+				__split_huge_swap_pmd(vma, addr, src_pmd);
+				pte_free(dst_mm, pgtable);
+			} else if (ret == -ENOMEM) {
+				spin_unlock(src_ptl);
+				spin_unlock(dst_ptl);
+				ret = add_swap_count_continuation(entry,
+								  GFP_KERNEL);
+				if (ret < 0) {
+					ret = -ENOMEM;
+					pte_free(dst_mm, pgtable);
+					goto out;
+				}
+				goto retry;
+			} else
+				VM_BUG_ON(1);
+			goto out_unlock;
+		}
+		VM_BUG_ON(1);
+	}
 	if (unlikely(!pmd_trans_huge(pmd))) {
 		pte_free(dst_mm, pgtable);
-- 
2.17.0
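
The -ENOMEM branch above follows a familiar pattern: swap_duplicate() is
called with the page table spinlocks held, so the swap count continuation
can only be allocated with GFP_ATOMIC; when that fails, both pmd locks are
dropped, add_swap_count_continuation() is retried with GFP_KERNEL, and the
copy restarts from the retry: label so the PMD can be re-validated after the
locks were released. A minimal userspace sketch of that lock-drop-and-retry
pattern, using pthread mutexes and hypothetical helpers (try_dup_entry,
alloc_reserve_blocking) rather than the kernel's APIs, might look like this:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int have_reserve;	/* stands in for the swap count continuation page */

/* Hypothetical stand-in for swap_duplicate(): fails until a reserve exists. */
static int try_dup_entry(void)
{
	return have_reserve ? 0 : -ENOMEM;
}

/* Hypothetical stand-in for add_swap_count_continuation(entry, GFP_KERNEL). */
static int alloc_reserve_blocking(void)
{
	have_reserve = 1;	/* the real call may sleep to allocate a page */
	return 0;
}

static int copy_entry(void)
{
	int ret;

retry:
	pthread_mutex_lock(&table_lock);
	ret = try_dup_entry();
	if (ret == -ENOMEM) {
		/*
		 * In the kernel case we cannot block for memory while
		 * holding the lock: drop it, allocate, retry from the top.
		 */
		pthread_mutex_unlock(&table_lock);
		if (alloc_reserve_blocking() < 0)
			return -ENOMEM;
		goto retry;
	}
	pthread_mutex_unlock(&table_lock);
	return ret;
}

int main(void)
{
	printf("copy_entry() = %d\n", copy_entry());
	return 0;
}

The sketch only mirrors the control flow (a userspace mutex could of course
sleep across the allocation); in the patch itself the retry: label sits
before pmd_lock(), so *src_pmd is re-read and re-checked after the locks are
retaken.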