Message-ID: <338b4d56-7e5a-4d8f-8908-610f2c59e29e@redhat.com>
Date: Tue, 21 May 2024 11:56:54 +0200
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Muchun Song <muchun.song@...ux.dev>, Vishal Moola <vishal.moola@...il.com>
Subject: Re: [PATCH] mm/hugetlb: Move vmf_anon_prepare upfront in hugetlb_wp
On 21.05.24 09:34, Oscar Salvador wrote:
> hugetlb_wp calls vmf_anon_prepare() after having allocated a page, which
> means that we might need to call restore_reserve_on_error() upon error.
> vmf_anon_prepare() releases the vma lock before returning, but
> restore_reserve_on_error() expects the vma lock to be held by the caller.
>
> Fix it by calling vmf_anon_prepare() before allocating the page.
>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> Fixes: 9acad7ba3e25 ("hugetlb: use vmf_anon_prepare() instead of anon_vma_prepare()")
> ---
> I did not hit this bug; I just spotted it because I was looking at hugetlb_wp
> for some other reason, and I did not want to get creative trying to trigger it
> just to get a backtrace.
> My assumption is that we could trigger this if 1) this was a shared mapping,
> so no anon_vma, and 2) we came in via GUP code with FOLL_WRITE, which would cause
> FLAG_UNSHARE to be passed, so we would end up in hugetlb_wp().
FOLL_WRITE should never result in FLAG_UNSHARE.
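
Rough sketch of what I mean (illustrative only, not the actual mm/gup.c logic;
sketch_fault_flags() is a made-up helper): FOLL_WRITE drives an ordinary write
fault, while FAULT_FLAG_UNSHARE is only used for the read-only pin case, e.g.
FOLL_PIN without FOLL_WRITE on a possibly-shared anon page:

/*
 * Illustrative sketch only -- not the real GUP code.
 * FOLL_WRITE takes the ordinary write-fault (CoW) path;
 * unsharing is a separate, read-only pin case.
 */
static unsigned int sketch_fault_flags(unsigned int gup_flags, bool must_unshare)
{
	if (gup_flags & FOLL_WRITE)
		return FAULT_FLAG_WRITE;	/* never FAULT_FLAG_UNSHARE */
	if (must_unshare)			/* e.g. FOLL_PIN on a shared anon page */
		return FAULT_FLAG_UNSHARE;
	return 0;
}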
>
> mm/hugetlb.c | 17 +++++++++--------
> 1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6be78e7d4f6e..eb0d8a45505e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6005,6 +6005,15 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
> * be acquired again before returning to the caller, as expected.
> */
> spin_unlock(vmf->ptl);
> +
> + /*
> + * When the original hugepage is shared one, it does not have
> + * anon_vma prepared.
> + */
> + ret = vmf_anon_prepare(vmf);
> + if (unlikely(ret))
> + goto out_release_old;
> +
> new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);
>
> if (IS_ERR(new_folio)) {
> @@ -6058,14 +6067,6 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
> goto out_release_old;
> }
>
> - /*
> - * When the original hugepage is shared one, it does not have
> - * anon_vma prepared.
> - */
> - ret = vmf_anon_prepare(vmf);
> - if (unlikely(ret))
> - goto out_release_all;
> -
> if (copy_user_large_folio(new_folio, old_folio, vmf->real_address, vma)) {
> ret = VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(hstate_index(h));
> goto out_release_all;
The joy of hugetlb reservation code.
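
For anyone staring at this later, a condensed sketch of the ordering contract the
patch establishes (not the function verbatim): vmf_anon_prepare() can drop the vma
lock when it fails, and restore_reserve_on_error() must run with the lock held, so
the prepare step has to happen before we allocate anything whose reservation might
need restoring:

/*
 * Condensed sketch of the fixed hugetlb_wp() ordering, not the code verbatim:
 * the step that may fail and drop the vma lock comes before the allocation,
 * so every error path that calls restore_reserve_on_error() still holds it.
 */
ret = vmf_anon_prepare(vmf);		/* may drop the vma lock on failure */
if (unlikely(ret))
	goto out_release_old;		/* no new folio allocated, nothing to restore */

new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);
...
/* any later failure restores the reserve while the vma lock is still held */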
LGTM
--
Cheers,
David / dhildenb