Message-ID: <87tvyd4fsx.fsf@yhuang-dev.intel.com>
Date: Thu, 02 Nov 2017 09:42:22 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: <zhouxianrong@...wei.com>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<akpm@...ux-foundation.org>, <ying.huang@...el.com>,
<tim.c.chen@...ux.intel.com>, <mhocko@...e.com>,
<rientjes@...gle.com>, <mingo@...nel.org>,
<vegard.nossum@...cle.com>, <minchan@...nel.org>,
<aaron.lu@...el.com>, <zhouxiyu@...wei.com>, <weidu.du@...wei.com>,
<fanghua3@...wei.com>, <hutj@...wei.com>, <won.ho.park@...wei.com>
Subject: Re: [PATCH] mm: extend reuse_swap_page range as much as possible

<zhouxianrong@...wei.com> writes:
> From: zhouxianrong <zhouxianrong@...wei.com>
>
> Originally, reuse_swap_page() required that the sum of the page's
> mapcount and swapcount be less than or equal to one; only in that
> case could the page be reused and the COW avoided.
>
> Now reuse_swap_page() requires only that the page's mapcount be
> less than or equal to one and that the page not be dirty in the
> swap cache; in that case its swapcount does not matter.
>
> A page that is clean in the swap cache has already been written
> out to the swap device by an earlier reclaim and then read back
> in on a swap fault. Such a page can be reused even though its
> swapcount is greater than one; the COW is postponed to later
> accesses to the swap-cache page instead of being done now.
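>
> In code terms, the reuse test changes roughly as follows (a
> simplified sketch, not the exact kernel function; THP and the
> writeback path are omitted):
>
> 	/* Illustrative only, not the actual kernel code. */
> 	static bool may_reuse(int total_mapcount, int total_swapcount,
> 			      bool dirty, bool in_swap_cache)
> 	{
> 		/* New case: a clean swap-cache page mapped at most
> 		 * once can be reused regardless of its swapcount,
> 		 * since an identical copy already exists on the
> 		 * swap device. */
> 		if (total_mapcount <= 1 && in_swap_cache && !dirty)
> 			return true;
>
> 		/* Original case: reuse only when ours is the sole
> 		 * reference, counting mappings and swap entries. */
> 		return total_mapcount + total_swapcount <= 1;
> 	}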
>
> I tested this patch on kernel 4.4.23 on arm64, without huge
> pages. It works fine.

Why do you need this?  You save copying one page from memory to
memory (COW) now, at the cost of reading the page back from disk
into memory later?
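
Concretely: the patch drops the clean copy from the swap cache and
marks the page dirty, so any other task still referencing the same
swap entry has to do I/O on its next fault. A rough sketch of the
sequence (illustrative comments, not the actual fault-path code):

	/* Task A faults on a clean, once-mapped swap-cache page. */
	reuse_swap_page(page);	/* with the patch: reuse it and skip
				 * the COW copy; the page is deleted
				 * from the swap cache and dirtied. */

	/* Task B later faults on the same swap entry: the in-memory
	 * copy is gone, so do_swap_page() must read the page back
	 * from the swap device (via swapin_readahead()), which is
	 * typically far slower than the in-memory page copy that
	 * COW would have done. */
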
Best Regards,
Huang, Ying

> Signed-off-by: zhouxianrong <zhouxianrong@...wei.com>
> ---
> mm/swapfile.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index bf91dc9..c21cf07 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1543,22 +1543,27 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
> bool reuse_swap_page(struct page *page, int *total_map_swapcount)
> {
> int count, total_mapcount, total_swapcount;
> + int dirty;
>
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> if (unlikely(PageKsm(page)))
> return false;
> + dirty = PageDirty(page);
> count = page_trans_huge_map_swapcount(page, &total_mapcount,
> &total_swapcount);
> if (total_map_swapcount)
> *total_map_swapcount = total_mapcount + total_swapcount;
> - if (count == 1 && PageSwapCache(page) &&
> + if ((total_mapcount <= 1 && !dirty) ||
> + (count == 1 && PageSwapCache(page) &&
> (likely(!PageTransCompound(page)) ||
> /* The remaining swap count will be freed soon */
> - total_swapcount == page_swapcount(page))) {
> + total_swapcount == page_swapcount(page)))) {
> if (!PageWriteback(page)) {
> page = compound_head(page);
> delete_from_swap_cache(page);
> SetPageDirty(page);
> + if (!dirty)
> + return true;
> } else {
> swp_entry_t entry;
> struct swap_info_struct *p;