Message-ID: <87353n9354.fsf@nvidia.com>
Date: Tue, 23 May 2023 12:39:54 +1000
From: Alistair Popple <apopple@...dia.com>
To: Qi Zheng <qi.zheng@...ux.dev>
Cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Yang Shi <shy828301@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Ralph Campbell <rcampbell@...dia.com>,
Ira Weiny <ira.weiny@...el.com>,
Steven Price <steven.price@....com>,
SeongJae Park <sj@...nel.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Zack Rusin <zackr@...are.com>, Jason Gunthorpe <jgg@...pe.ca>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Minchan Kim <minchan@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Song Liu <song@...nel.org>,
Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 13/31] mm/hmm: retry if pte_offset_map() fails

Qi Zheng <qi.zheng@...ux.dev> writes:
> On 2023/5/22 13:05, Hugh Dickins wrote:
>> hmm_vma_walk_pmd() is called through mm_walk, but already has a goto
>> again loop of its own, so take part in that if pte_offset_map() fails.
>>
>> Signed-off-by: Hugh Dickins <hughd@...gle.com>
>> ---
>>  mm/hmm.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/hmm.c b/mm/hmm.c
>> index e23043345615..b1a9159d7c92 100644
>> --- a/mm/hmm.c
>> +++ b/mm/hmm.c
>> @@ -381,6 +381,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>>  	}
>>
>>  	ptep = pte_offset_map(pmdp, addr);
>> +	if (!ptep)
>> +		goto again;
>>  	for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) {
>>  		int r;
>>
>
> I haven't read the entire patch set yet, but taking a note here:
> hmm_vma_handle_pte() will unmap the pte and then call
> migration_entry_wait(), which remaps the pte internally. That remap
> can now fail, so we need to handle that case as below:
I don't see a problem here. Sure, hmm_vma_handle_pte() might return
-EBUSY, but that will get returned up to hmm_range_fault(), which will
retry the whole thing and presumably fail when looking at the PMD.
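
For context, hmm_range_fault() already loops while the page walk
returns -EBUSY; roughly (paraphrased from memory and trimmed, so take
it as a sketch rather than the exact code):

int hmm_range_fault(struct hmm_range *range)
{
	struct hmm_vma_walk hmm_vma_walk = {
		.range = range,
		.last = range->start,
	};
	struct mm_struct *mm = range->notifier->mm;
	int ret;

	mmap_assert_locked(mm);

	do {
		/* If the range is no longer valid, make the caller retry. */
		if (mmu_interval_check_retry(range->notifier,
					     range->notifier_seq))
			return -EBUSY;
		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
				      &hmm_walk_ops, &hmm_vma_walk);
		/*
		 * -EBUSY from the walk (e.g. hmm_vma_handle_pte() having
		 * waited on a migration entry) comes back here and the
		 * range is simply walked again from ->last.
		 */
	} while (ret == -EBUSY);

	return ret;
}

So the -EBUSY case is already handled by another trip around this loop.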
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 6a151c09de5e..eb726ff0981c 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -276,7 +276,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  		if (is_migration_entry(entry)) {
>  			pte_unmap(ptep);
>  			hmm_vma_walk->last = addr;
> -			migration_entry_wait(walk->mm, pmdp, addr);
> +			if (!migration_entry_wait(walk->mm, pmdp, addr))
> +				return -EAGAIN;
>  			return -EBUSY;
>  		}
>
> @@ -386,6 +387,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>
>  		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns);
>  		if (r) {
> +			if (r == -EAGAIN)
> +				goto again;
>  			/* hmm_vma_handle_pte() did pte_unmap() */
>  			return r;
>  		}
>
> Of course, migration_entry_wait() itself also needs to be modified.
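
For illustration, the if (!migration_entry_wait(...)) call above seems
to assume migration_entry_wait() is changed to report whether it could
remap the pte. A purely hypothetical sketch of that shape (not current
kernel code; __migration_entry_wait()'s locking and unmap details are
glossed over):

bool migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
			  unsigned long address)
{
	spinlock_t *ptl = pte_lockptr(mm, pmd);
	pte_t *ptep = pte_offset_map(pmd, address);

	/*
	 * With this series pte_offset_map() can fail if the page table
	 * has been freed underneath us; report that so the caller can
	 * retry instead of waiting here.
	 */
	if (!ptep)
		return false;

	/* Waits for the migration to finish; drops the pte map/lock. */
	__migration_entry_wait(mm, ptep, ptl);
	return true;
}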