Message-ID: <a9cbfa20-b83d-203c-85ef-77ffe9445913@redhat.com>
Date: Thu, 8 Dec 2022 14:14:42 +0100
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Muchun Song <songmuchun@...edance.com>,
John Hubbard <jhubbard@...dia.com>,
Andrea Arcangeli <aarcange@...hat.com>,
James Houghton <jthoughton@...gle.com>,
Jann Horn <jannh@...gle.com>, Rik van Riel <riel@...riel.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadav Amit <nadav.amit@...il.com>
Subject: Re: [PATCH v2 08/10] mm/hugetlb: Make walk_hugetlb_range() safe to
pmd unshare
On 07.12.22 21:30, Peter Xu wrote:
> Since walk_hugetlb_range() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
> arch/s390/mm/gmap.c | 2 ++
> fs/proc/task_mmu.c | 2 ++
> include/linux/pagewalk.h | 11 ++++++++++-
> mm/hmm.c | 15 ++++++++++++++-
> mm/pagewalk.c | 2 ++
> 5 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 8947451ae021..292a54c490d4 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2643,7 +2643,9 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
> end = start + HPAGE_SIZE - 1;
> __storage_key_init_range(start, end);
> set_bit(PG_arch_1, &page->flags);
> + hugetlb_vma_unlock_read(walk->vma);
> cond_resched();
> + hugetlb_vma_lock_read(walk->vma);
> return 0;
> }
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e35a0398db63..cf3887fb2905 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1613,7 +1613,9 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
> frame++;
> }
>
> + hugetlb_vma_unlock_read(walk->vma);
> cond_resched();
> + hugetlb_vma_lock_read(walk->vma);

We already hold the mmap_lock while rescheduling. Even without the
cond_resched() we might happily reschedule on a preemptible kernel, so
I'm not sure this optimization is strictly required, or even helpful in
practice, here.
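
Just to illustrate with a simplified sketch (the callback context is
assumed to be a ->hugetlb_entry() walk as in this patch; only the
hugetlb_vma_*_read() helpers are taken from the patch itself):

	/*
	 * Variant in this patch: drop the vma read lock around the
	 * voluntary reschedule, so concurrent PMD unsharing is not
	 * blocked while we sleep.
	 */
	hugetlb_vma_unlock_read(walk->vma);
	cond_resched();
	hugetlb_vma_lock_read(walk->vma);

vs.

	/*
	 * Simply keeping the vma lock held: on a preemptible kernel
	 * we may reschedule at any point anyway, and the mmap_lock
	 * stays held across the reschedule in both variants.
	 */
	cond_resched();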

In the worst case, concurrent unsharing would have to wait.

For example, s390_enable_skey() is called at most once per VM; for most
VMs it is never called at all.

Or am I missing something important?
--
Thanks,
David / dhildenb