Message-Id: <20180912004414.22583-18-ying.huang@intel.com>
Date: Wed, 12 Sep 2018 08:44:10 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Huang Ying <ying.huang@...el.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Shaohua Li <shli@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Daniel Jordan <daniel.m.jordan@...cle.com>
Subject: [PATCH -V5 RESEND 17/21] swap: Support PMD swap mapping for MADV_WILLNEED
During MADV_WILLNEED, for a PMD swap mapping, if THP swapin is enabled
for the VMA, the whole swap cluster will be swapped in. Otherwise, the
huge swap cluster and the PMD swap mapping will be split and fall back
to PTE swap mappings.
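For reference, a minimal userspace sketch of how this path is exercised
(illustrative only; the mapping size, MADV_HUGEPAGE use, and the assumption
that the region has been swapped out as a whole cluster are not part of this
patch):

/*
 * Illustrative only: userspace trigger for swapin_walk_pmd_entry() below.
 * Assumes a 2MB-aligned, THP-backed anonymous mapping whose contents have
 * been swapped out under memory pressure.
 */
#include <sys/mman.h>
#include <stdio.h>

#define HPAGE_SIZE	(2UL << 20)	/* 2MB huge page on x86_64 */

int main(void)
{
	void *buf = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(buf, HPAGE_SIZE, MADV_HUGEPAGE);	/* request THP */

	/* ... memory is used, then swapped out under pressure ... */

	/*
	 * MADV_WILLNEED reaches swapin_walk_pmd_entry().  With this patch,
	 * a PMD swap mapping is either swapped in as a whole cluster (when
	 * THP swapin is enabled for the VMA) or split and handled PTE by
	 * PTE.
	 */
	madvise(buf, HPAGE_SIZE, MADV_WILLNEED);
	return 0;
}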
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Shaohua Li <shli@...nel.org>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: Zi Yan <zi.yan@...rutgers.edu>
Cc: Daniel Jordan <daniel.m.jordan@...cle.com>
---
mm/madvise.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 07ef599d4255..608c5ae201c6 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -196,14 +196,36 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
pte_t *orig_pte;
struct vm_area_struct *vma = walk->private;
unsigned long index;
+ swp_entry_t entry;
+ struct page *page;
+ pmd_t pmdval;
+
+ pmdval = *pmd;
+ if (IS_ENABLED(CONFIG_THP_SWAP) && is_swap_pmd(pmdval) &&
+ !is_pmd_migration_entry(pmdval)) {
+ entry = pmd_to_swp_entry(pmdval);
+ if (!transparent_hugepage_swapin_enabled(vma)) {
+ if (!split_swap_cluster(entry, 0))
+ split_huge_swap_pmd(vma, pmd, start, pmdval);
+ } else {
+ page = read_swap_cache_async(entry,
+ GFP_HIGHUSER_MOVABLE,
+ vma, start, false);
+ if (page) {
+ /* The swap cluster has been split under us */
+ if (!PageTransHuge(page))
+ split_huge_swap_pmd(vma, pmd, start,
+ pmdval);
+ put_page(page);
+ }
+ }
+ }
if (pmd_none_or_trans_huge_or_clear_bad(pmd))
return 0;
for (index = start; index != end; index += PAGE_SIZE) {
pte_t pte;
- swp_entry_t entry;
- struct page *page;
spinlock_t *ptl;
orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
--
2.16.4