Message-Id: <20230713103407.902e24dc90e85a9779ba885c@linux-foundation.org>
Date: Thu, 13 Jul 2023 10:34:07 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Jiaqi Yan <jiaqiyan@...gle.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
James Houghton <jthoughton@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH 0/2] Fix hugetlb free path race with memory errors
On Tue, 11 Jul 2023 15:09:40 -0700 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> The race window was discovered in the discussion of Jiaqi Yan's series
> "Improve hugetlbfs read on HWPOISON hugepages":
> https://lore.kernel.org/linux-mm/20230616233447.GB7371@monkey/
>
> Freeing a hugetlb page back to low level memory allocators is performed
> in two steps.
> 1) Under hugetlb lock, remove page from hugetlb lists and clear destructor
> 2) Outside lock, allocate vmemmap if necessary and call low level free
> Between these two steps, the hugetlb page will appear as a normal
> compound page. However, vmemmap for tail pages could be missing.
> If a memory error occurs at this time, we could try to update the page
> flags of non-existent page structs.
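
[For illustration, a minimal sketch of the two steps described above; the
helper names are hypothetical and this is not the actual mm/hugetlb.c code,
only the shape of the race window:]

/*
 * Illustrative sketch only: condensed kernel-C pseudocode of the
 * two-step free path, using hypothetical helper names.
 */
static void free_hugetlb_page_sketch(struct page *page)
{
	/* Step 1: under hugetlb_lock, unlink and clear the destructor. */
	spin_lock_irq(&hugetlb_lock);
	remove_from_hugetlb_lists(page);	/* hypothetical helper */
	clear_compound_page_dtor(page);		/* page now looks "normal" */
	spin_unlock_irq(&hugetlb_lock);

	/*
	 * RACE WINDOW: the page is no longer recognizable as hugetlb,
	 * but the vmemmap for its tail pages may still be optimized away.
	 * A memory error handled here can touch page structs that do
	 * not exist.
	 */

	/* Step 2: outside the lock, restore vmemmap and free for real. */
	restore_vmemmap_if_needed(page);	/* hypothetical helper */
	free_pages_to_allocator(page);		/* hypothetical helper */
}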
>
> A much more detailed description is in the first patch.
>
> The first patch addresses the race window. However, it adds a
> hugetlb_lock lock/unlock cycle to every vmemmap optimized hugetlb
> page free operation. This could lead to slowdowns if one is freeing
> a large number of hugetlb pages.
>
> The second patch optimizes the update_and_free_pages_bulk routine
> to take the lock only once in bulk operations.
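
[Again purely for illustration, a sketch of the bulk-free idea: a single
hugetlb_lock cycle covers step 1 for the whole list, while step 2 stays
outside the lock. Helper names are hypothetical; this is not the real
update_and_free_pages_bulk():]

static void bulk_free_sketch(struct list_head *pages)
{
	struct page *page, *tmp;

	/* One lock/unlock cycle covers step 1 for every page on the list. */
	spin_lock_irq(&hugetlb_lock);
	list_for_each_entry(page, pages, lru)
		prepare_hugetlb_page_for_free(page);	/* hypothetical */
	spin_unlock_irq(&hugetlb_lock);

	/* Step 2 (vmemmap restore + low-level free) runs outside the lock. */
	list_for_each_entry_safe(page, tmp, pages, lru) {
		list_del(&page->lru);
		finish_hugetlb_page_free(page);		/* hypothetical */
	}
}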
>
> The second patch is technically not a bug fix, but includes a Fixes
> tag and Cc stable to avoid a performance regression. It can be
> combined with the first, but was done separately to make reviewing easier.
>
I feel that backporting performance improvements into -stable is not a
usual thing to do. Perhaps the fact that it's a regression fix changes
this, but why?

Much hinges on the magnitude of the performance change.  Are you able
to quantify this at all?