Message-Id: <20230523180703.89902-1-sj@kernel.org>
Date: Tue, 23 May 2023 18:07:03 +0000
From: SeongJae Park <sj@...nel.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Yang Shi <shy828301@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Alistair Popple <apopple@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Ira Weiny <ira.weiny@...el.com>,
Steven Price <steven.price@....com>,
SeongJae Park <sj@...nel.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Zack Rusin <zackr@...are.com>, Jason Gunthorpe <jgg@...pe.ca>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Minchan Kim <minchan@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Song Liu <song@...nel.org>,
Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 09/31] mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails
Hello Hugh,
On Sun, 21 May 2023 22:00:15 -0700 (PDT) Hugh Dickins <hughd@...gle.com> wrote:
> Simple walk_page_range() users should set ACTION_AGAIN to retry when
> pte_offset_map_lock() fails.
>
> No need to check pmd_trans_unstable(): that was precisely to avoid the
> possibility of calling pte_offset_map() on a racily removed or inserted
> THP entry, but such cases are now safely handled inside it. Likewise
> there is no need to check pmd_none() or pmd_bad() before calling it.
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
For the mm/damon part below,
Reviewed-by: SeongJae Park <sj@...nel.org>
Thanks,
SJ
> ---
> fs/proc/task_mmu.c | 32 ++++++++++++++++----------------
> mm/damon/vaddr.c | 12 ++++++++----
> mm/mempolicy.c | 7 ++++---
> mm/mincore.c | 9 ++++-----
> mm/mlock.c | 4 ++++
> 5 files changed, 36 insertions(+), 28 deletions(-)
>
[...]
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 1fec16d7263e..b8762ff15c3c 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -318,9 +318,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
> spin_unlock(ptl);
> }
>
> - if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> - return 0;
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> + if (!pte) {
> + walk->action = ACTION_AGAIN;
> + return 0;
> + }
> if (!pte_present(*pte))
> goto out;
> damon_ptep_mkold(pte, walk->mm, addr);
> @@ -464,9 +466,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
> regular_page:
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> - if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> - return -EINVAL;
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> + if (!pte) {
> + walk->action = ACTION_AGAIN;
> + return 0;
> + }
> if (!pte_present(*pte))
> goto out;
> folio = damon_get_folio(pte_pfn(*pte));
[...]