Message-ID: <765d450c-2ff6-1420-b94f-528d56bffee4@oracle.com>
Date: Thu, 30 Mar 2023 10:02:52 -0700
From: Sidhartha Kumar <sidhartha.kumar@...cle.com>
To: Peng Zhang <zhangpeng362@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
willy@...radead.org, mike.kravetz@...cle.com
Cc: vishal.moola@...il.com, muchun.song@...ux.dev,
wangkefeng.wang@...wei.com, sunnanyong@...wei.com
Subject: Re: [PATCH v4 1/6] userfaultfd: convert mfill_atomic_pte_copy() to
use a folio
On 3/30/23 6:40 AM, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@...wei.com>
>
> Call vma_alloc_folio() directly instead of alloc_page_vma(). Add an
> assertion that this is a single-page folio and remove several calls to
> compound_head().
>
> Signed-off-by: ZhangPeng <zhangpeng362@...wei.com>
> ---
> mm/userfaultfd.c | 25 +++++++++++++------------
> 1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 7f1b5f8b712c..efa9e1d681ee 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -137,15 +137,15 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
> {
> void *page_kaddr;
> int ret;
> - struct page *page;
> + struct folio *folio;
>
> if (!*pagep) {
> ret = -ENOMEM;
> - page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> - if (!page)
> + folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma, dst_addr, false);
> + if (!folio)
> goto out;
>
> - page_kaddr = kmap_local_page(page);
> + page_kaddr = kmap_local_folio(folio, 0);
Should this variable name be kept as page_kaddr or should it be changed
to something like folio_kaddr? In the !CONFIG_HIGHMEM case
kmap_local_folio() boils down to page_address(), so maybe page_kaddr is
better.
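For context, the !CONFIG_HIGHMEM definition in include/linux/highmem.h is
roughly the following (paraphrased from memory, not quoted verbatim):

```c
/* Sketch of the !CONFIG_HIGHMEM kmap_local_folio() from
 * include/linux/highmem.h; paraphrased, treat the exact form as
 * approximate. */
static inline void *kmap_local_folio(struct folio *folio, size_t offset)
{
	return page_address(&folio->page) + offset;
}
```

With offset 0, as in this patch, that is just the page's kernel virtual
address, which is why keeping page_kaddr seems reasonable.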
> /*
> * The read mmap_lock is held here. Despite the
> * mmap_lock being read recursive a deadlock is still
> @@ -171,36 +171,37 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
> /* fallback to copy_from_user outside mmap_lock */
> if (unlikely(ret)) {
> ret = -ENOENT;
> - *pagep = page;
> + *pagep = &folio->page;
> /* don't free the page */
> goto out;
> }
>
> - flush_dcache_page(page);
> + flush_dcache_folio(folio);
> } else {
> - page = *pagep;
> + folio = page_folio(*pagep);
> + VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> *pagep = NULL;
> }
>
> /*
> - * The memory barrier inside __SetPageUptodate makes sure that
> + * The memory barrier inside __folio_mark_uptodate makes sure that
> * preceding stores to the page contents become visible before
> * the set_pte_at() write.
> */
> - __SetPageUptodate(page);
> + __folio_mark_uptodate(folio);
>
> ret = -ENOMEM;
> - if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
> + if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
> goto out_release;
>
> ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
> - page, true, flags);
> + &folio->page, true, flags);
> if (ret)
> goto out_release;
> out:
> return ret;
> out_release:
> - put_page(page);
> + folio_put(folio);
> goto out;
> }
Thanks,
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
>