Message-ID: <YcDRC7e0fNAMYi3m@casper.infradead.org>
Date: Mon, 20 Dec 2021 18:52:59 +0000
From: Matthew Wilcox <willy@...radead.org>
To: David Hildenbrand <david@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Nadav Amit <namit@...are.com>,
Jason Gunthorpe <jgg@...dia.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Yang Shi <shy828301@...il.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
Michal Hocko <mhocko@...nel.org>,
Rik van Riel <riel@...riel.com>,
Roman Gushchin <guro@...com>,
Andrea Arcangeli <aarcange@...hat.com>,
Peter Xu <peterx@...hat.com>,
Donald Dutile <ddutile@...hat.com>,
Christoph Hellwig <hch@....de>,
Oleg Nesterov <oleg@...hat.com>, Jan Kara <jack@...e.cz>,
Linux-MM <linux-mm@...ck.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>
Subject: Re: [PATCH v1 06/11] mm: support GUP-triggered unsharing via
FAULT_FLAG_UNSHARE (!hugetlb)

On Mon, Dec 20, 2021 at 06:37:30PM +0000, Matthew Wilcox wrote:
> +++ b/mm/memory.c
> @@ -3626,7 +3626,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> pte = mk_pte(page, vma->vm_page_prot);
> - if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> + if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> vmf->flags &= ~FAULT_FLAG_WRITE;
> ret |= VM_FAULT_WRITE;
[...]
> @@ -1673,17 +1665,14 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
> * reuse_swap_page() returns false, but it may be always overwritten
> * (see the other implementation for CONFIG_SWAP=n).
> */
> -bool reuse_swap_page(struct page *page, int *total_map_swapcount)
> +bool reuse_swap_page(struct page *page)
> {
> - int count, total_mapcount, total_swapcount;
> + int count, total_swapcount;
>
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> if (unlikely(PageKsm(page)))
> return false;
> - count = page_trans_huge_map_swapcount(page, &total_mapcount,
> - &total_swapcount);
> - if (total_map_swapcount)
> - *total_map_swapcount = total_mapcount + total_swapcount;
> + count = page_trans_huge_map_swapcount(page, &total_swapcount);
> if (count == 1 && PageSwapCache(page) &&
> (likely(!PageTransCompound(page)) ||
> /* The remaining swap count will be freed soon */

It makes me wonder whether reuse_swap_page() could also be based on the
refcount instead of the mapcount?
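Roughly along these lines, perhaps (just a sketch; the helper name and
the exact reference count it expects are made up for illustration, they
are not part of this series):

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/page-flags.h>

/*
 * Hypothetical sketch only: decide reuse from the page refcount, in
 * the spirit of what do_wp_page() does for anonymous pages.  The
 * caller (do_swap_page()) is assumed to hold one reference plus the
 * page lock, and the swap cache, if the page is still in it, holds
 * one more.  Compound/THP accounting is ignored here.
 */
static bool reuse_swap_page_by_refcount(struct page *page)
{
	VM_BUG_ON_PAGE(!PageLocked(page), page);

	if (unlikely(PageKsm(page)))
		return false;

	/*
	 * Any reference beyond ours and the swap cache's means another
	 * mapping, a speculative lookup or a GUP pin may still see the
	 * page, so it must not be mapped writable here.
	 */
	return page_count(page) == 1 + !!PageSwapCache(page);
}

A transient extra reference (say, from a concurrent swap cache lookup)
would only make this refuse reuse and fall back to copying, which is
safe, just not optimal.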