Message-ID: <20191016073731.4076725-4-songliubraving@fb.com>
Date: Wed, 16 Oct 2019 00:37:30 -0700
From: Song Liu <songliubraving@...com>
To: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<akpm@...ux-foundation.org>
CC: <matthew.wilcox@...cle.com>, <kernel-team@...com>,
<william.kucharski@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Song Liu <songliubraving@...com>
Subject: [PATCH 3/4] mm/thp: allow dropping THP from page cache
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Once a THP is added to the page cache, it cannot be dropped via
/proc/sys/vm/drop_caches: invalidate_inode_page() cannot freeze the
refcount while the pagevec still pins the huge page, and
__remove_mapping() only expects the extra HPAGE_PMD_NR page cache
references for a THP in the swap cache.

Fix both places: have invalidate_mapping_pages() take its own pin and
release the pagevec before trying to invalidate a huge page, and make
__remove_mapping() accept 1 + HPAGE_PMD_NR references for file-backed
(non swap-backed) THPs as well.
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Tested-by: Song Liu <songliubraving@...com>
Signed-off-by: Song Liu <songliubraving@...com>
---
 mm/truncate.c | 12 ++++++++++++
 mm/vmscan.c   |  3 ++-
 2 files changed, 14 insertions(+), 1 deletion(-)
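
A quick way to exercise the drop_caches path (this is an illustrative
sketch, not part of the patch: it assumes a kernel with
CONFIG_READ_ONLY_THP_FOR_FS that exposes FileHugePages in /proc/meminfo
and that some file-backed THPs are already in the page cache; run as
root):

/*
 * Illustrative user-space sketch, not part of the patch: check whether
 * writing to /proc/sys/vm/drop_caches shrinks the file-backed THP
 * counter in /proc/meminfo.
 */
#include <stdio.h>

static long file_huge_pages_kb(void)
{
	char line[128];
	long kb = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "FileHugePages: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(void)
{
	long before = file_huge_pages_kb();
	FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return 1;
	}
	fputs("3\n", f);	/* drop page cache and reclaimable slab */
	fclose(f);

	printf("FileHugePages: %ld kB -> %ld kB\n",
	       before, file_huge_pages_kb());
	return 0;
}

With the patch applied, clean and unmapped file THPs should be dropped
along with the rest of the page cache, so the counter goes down; without
it, the huge pages stay put.
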
diff --git a/mm/truncate.c b/mm/truncate.c
index 8563339041f6..dd9ebc1da356 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -592,6 +592,16 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 					unlock_page(page);
 					continue;
 				}
+
+				/* Take a pin outside pagevec */
+				get_page(page);
+
+				/*
+				 * Drop extra pins before trying to invalidate
+				 * the huge page.
+				 */
+				pagevec_remove_exceptionals(&pvec);
+				pagevec_release(&pvec);
 			}
 
 			ret = invalidate_inode_page(page);
@@ -602,6 +612,8 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 			 */
 			if (!ret)
 				deactivate_file_page(page);
+			if (PageTransHuge(page))
+				put_page(page);
 			count += ret;
 		}
 		pagevec_remove_exceptionals(&pvec);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c6659bb758a4..1d80a188ad4a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -932,7 +932,8 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	 * Note that if SetPageDirty is always performed via set_page_dirty,
 	 * and thus under the i_pages lock, then this ordering is not required.
 	 */
-	if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
+	if (unlikely(PageTransHuge(page)) &&
+	    (PageSwapCache(page) || !PageSwapBacked(page)))
 		refcount = 1 + HPAGE_PMD_NR;
 	else
 		refcount = 2;
--
2.17.1
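
For context on the vmscan.c hunk: the page cache holds HPAGE_PMD_NR
references on the head page of a huge page, plus the one taken by the
caller that isolated it, so __remove_mapping() must freeze the refcount
at 1 + HPAGE_PMD_NR for a file-backed (non swap-backed) THP, just as it
already does for a THP in the swap cache. The sketch below only mirrors
that expectation; struct page_state and expected_refcount() are invented
for illustration and are not kernel code.

/*
 * Illustrative sketch only: mirrors the refcount expectation used by
 * __remove_mapping() after this patch.  HPAGE_PMD_NR assumes 4K base
 * pages and 2M PMD-sized huge pages.
 */
#define HPAGE_PMD_NR	512

struct page_state {
	int trans_huge;		/* PageTransHuge() */
	int swap_cache;		/* PageSwapCache() */
	int swap_backed;	/* PageSwapBacked() */
};

static int expected_refcount(const struct page_state *p)
{
	/*
	 * A huge page in the swap cache, or a file-backed (non
	 * swap-backed) huge page in the page cache, is referenced once
	 * per subpage slot by the mapping plus once by the caller.
	 */
	if (p->trans_huge && (p->swap_cache || !p->swap_backed))
		return 1 + HPAGE_PMD_NR;

	/* Otherwise: one reference from the caller, one from the mapping. */
	return 2;
}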