Message-ID: <Yv7AlZyNaAgpB4Qg@monkey>
Date: Thu, 18 Aug 2022 15:43:33 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <songmuchun@...edance.com>,
Linux-MM <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [bug report] mm/hugetlb: various bugs with avoid_reserve case in
alloc_huge_page()
On 08/17/22 16:31, Miaohe Lin wrote:
> Hi all:
> When I investigated the mm/hugetlb.c code again, I found a few possible issues
> with the avoid_reserve case. (The relevant code is really hard for me to follow.) Please
> take a look at the analysis below:
Thank you for taking a close look at this code!
I agree that the code is hard to follow. I have spent many hours/days/weeks
chasing down the cause of incorrect reservation counts. I imagine there could
be more issues, especially when you add the uncommon avoid_reserve and
MAP_NORESERVE processing.
> 1. avoid_reserve issue with h->resv_huge_pages in alloc_huge_page.
Did you actually see this issue, or is it just based on code inspection?
I tried to recreate, but could not. When looking closer, this may not
even be possible.
> Assume:
> h->free_huge_pages 60
> h->resv_huge_pages 30
> spool->rsv_hpages 30
OK.
>
> When avoid_reserve is true, after alloc_huge_page(), we will have:
Take a close look at the calling paths for alloc_huge_page when avoid_reserve
is true. There are only two such call paths.
1) copy_hugetlb_page_range - We allocate pages in the 'early COW' processing.
   In such cases, the pages are private and not associated with a file,
   filesystem, or subpool (spool). Therefore, there should be no spool
   modifications.
2) hugetlb_wp (formerly called hugetlb_cow) - Again, we are allocating a
private page and should not be modifying spool.
If the above is correct, then we will not make the spool->rsv_hpages
modification that leads to the inconsistent results.
It is confusing that MAP_NORESERVE does not imply avoid_reserve will be
passed to alloc_huge_page.
> spool->rsv_hpages 29 /* hugepage_subpool_get_pages decreases it. */
> h->free_huge_pages 59
> h->resv_huge_pages 30 /* rsv_hpages is used, but *h->resv_huge_pages is not modified accordingly*. */
>
> If the hugetlb page is freed later, we will have:
> spool->rsv_hpages 30 /* hugepage_subpool_put_pages increases it. */
> h->free_huge_pages 60
> h->resv_huge_pages 31 /* *increased wrongly* due to hugepage_subpool_put_pages(spool, 1) == 0. */
> ^^
>
I'll take a closer look at 2 and 3 when we determine if 1 is a possible
issue or not.
--
Mike Kravetz
> 2. avoid_reserve issue with hugetlb rsvd cgroup charge for private mappings in alloc_huge_page.
>
> In general, if hugetlb pages are reserved, the corresponding rsvd counters are charged in resv_maps
> for private mappings. Otherwise they're charged against individual hugetlb pages. When alloc_huge_page()
> is called with avoid_reserve == true, hugetlb_cgroup_charge_cgroup_rsvd() will be called to charge
> the newly allocated hugetlb page even if there is a reservation for this page in resv_maps. Then
> vma_commit_reservation() is called to indicate that the reservation is consumed. So the reservation
> *cannot be used and thus leaks* from now on, because vma_needs_reservation always returns 1 for it.
>
> 3. avoid_reserve issue with restore_reserve_on_error
>
> There's an assumption in restore_reserve_on_error(): if HPageRestoreReserve is not set, there is an
> entry in the reserve map that was added by alloc_huge_page; otherwise HPageRestoreReserve would be set
> on the page. But this assumption *does not hold for avoid_reserve*: HPageRestoreReserve won't be set even
> if there is already an entry in the reserve map in the avoid_reserve case. So avoid_reserve should be
> considered in this function, i.e. we need *a reliable way* to determine whether the entry was added by
> alloc_huge_page().
>
> Are the above issues possible? Or am I missing something? These possible issues don't seem easy for
> me to fix. Any thoughts? Any response would be appreciated!
>
> Thanks!
> Miaohe Lin