Date: Wed, 1 Nov 2017 18:51:14 +0800
From: <zhouxianrong@...wei.com>
To: <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>, <ying.huang@...el.com>, <tim.c.chen@...ux.intel.com>, <mhocko@...e.com>, <rientjes@...gle.com>, <mingo@...nel.org>, <vegard.nossum@...cle.com>, <minchan@...nel.org>, <aaron.lu@...el.com>, <zhouxianrong@...wei.com>, <zhouxiyu@...wei.com>, <weidu.du@...wei.com>, <fanghua3@...wei.com>, <hutj@...wei.com>, <won.ho.park@...wei.com>
Subject: [PATCH] mm: extend reuse_swap_page range as much as possible

From: zhouxianrong <zhouxianrong@...wei.com>

Originally, reuse_swap_page() required that the sum of the page's mapcount and swapcount be less than or equal to one; only in that case could the page be reused and the COW avoided.

Now reuse_swap_page() requires only that the page's mapcount be less than or equal to one and that the page not be dirty in the swap cache; in that case its swap count does not matter. A page that is not dirty in the swap cache has already been written to the swap device during an earlier reclaim and was then read back on a swap fault. Such a page can be reused even when its swap count is greater than one, postponing the COW from now to later successive accesses to the swap cache page.

I tested this patch on kernel 4.4.23 with arm64 and without huge memory; it works fine.
Signed-off-by: zhouxianrong <zhouxianrong@...wei.com>
---
 mm/swapfile.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index bf91dc9..c21cf07 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1543,22 +1543,27 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
 bool reuse_swap_page(struct page *page, int *total_map_swapcount)
 {
 	int count, total_mapcount, total_swapcount;
+	int dirty;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	if (unlikely(PageKsm(page)))
 		return false;
+	dirty = PageDirty(page);
 	count = page_trans_huge_map_swapcount(page, &total_mapcount,
 					      &total_swapcount);
 	if (total_map_swapcount)
 		*total_map_swapcount = total_mapcount + total_swapcount;
-	if (count == 1 && PageSwapCache(page) &&
+	if ((total_mapcount <= 1 && !dirty) ||
+	    (count == 1 && PageSwapCache(page) &&
 	    (likely(!PageTransCompound(page)) ||
 	     /* The remaining swap count will be freed soon */
-	     total_swapcount == page_swapcount(page))) {
+	     total_swapcount == page_swapcount(page)))) {
 		if (!PageWriteback(page)) {
 			page = compound_head(page);
 			delete_from_swap_cache(page);
 			SetPageDirty(page);
+			if (!dirty)
+				return true;
 		} else {
 			swp_entry_t entry;
 			struct swap_info_struct *p;
-- 
1.7.9.5