Message-Id: <20240419141134.464ea3a1ef3d0e93c6711c93@linux-foundation.org>
Date: Fri, 19 Apr 2024 14:11:34 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: <muchun.song@...ux.dev>, <mike.kravetz@...cle.com>, <osalvador@...e.de>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when
dissolve_free_hugetlb_folio()
On Fri, 19 Apr 2024 16:58:19 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
> When I did memory failure tests recently, below warning occurs:
>
> DEBUG_LOCKS_WARN_ON(1)
> WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
> Modules linked in: mce_inject hwpoison_inject
> CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
>
> ...
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1773,7 +1773,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> * If vmemmap pages were allocated above, then we need to clear the
> * hugetlb flag under the hugetlb lock.
> */
> - if (clear_flag) {
> + if (folio_test_hugetlb(folio)) {
> spin_lock_irq(&hugetlb_lock);
> __folio_clear_hugetlb(folio);
> spin_unlock_irq(&hugetlb_lock);
Please let's prepare backportable fixes against current mainline, not
mm-unstable, because fixes against current -rcX and earlier will be
upstreamed ahead of the mm-unstable and mm-stable material.
I did this:
--- a/mm/hugetlb.c~mm-hugetlb-fix-debug_locks_warn_on1-when-dissolve_free_hugetlb_folio
+++ a/mm/hugetlb.c
@@ -1781,7 +1781,7 @@ static void __update_and_free_hugetlb_fo
* If vmemmap pages were allocated above, then we need to clear the
* hugetlb destructor under the hugetlb lock.
*/
- if (clear_dtor) {
+ if (folio_test_hugetlb(folio)) {
spin_lock_irq(&hugetlb_lock);
__clear_hugetlb_destructor(h, folio);
spin_unlock_irq(&hugetlb_lock);
_
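
For readers skimming the archive, here is a minimal userspace sketch of the
shape of the change: test the flag on the folio itself and only then take the
lock to clear it, instead of keying off a local boolean computed earlier.
This is an analogy only, not the kernel code; every name below is invented
for illustration, and the pthread mutex merely stands in for hugetlb_lock.

/*
 * Userspace analogy of the guarded clear in __update_and_free_hugetlb_folio().
 * All identifiers here (fake_folio, fake_hugetlb_lock, ...) are made up.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t fake_hugetlb_lock = PTHREAD_MUTEX_INITIALIZER;

struct fake_folio {
	bool hugetlb;	/* stands in for the folio's hugetlb flag */
};

static bool fake_folio_test_hugetlb(struct fake_folio *f)
{
	return f->hugetlb;
}

static void fake_update_and_free(struct fake_folio *f)
{
	/*
	 * Before the fix, a local boolean captured earlier decided whether
	 * to take the lock; after the fix, the object's own flag is tested,
	 * so the lock is only taken when there is actually a flag to clear.
	 */
	if (fake_folio_test_hugetlb(f)) {
		pthread_mutex_lock(&fake_hugetlb_lock);
		f->hugetlb = false;	/* plays the role of __folio_clear_hugetlb() */
		pthread_mutex_unlock(&fake_hugetlb_lock);
	}
}

int main(void)
{
	struct fake_folio f = { .hugetlb = true };

	fake_update_and_free(&f);
	printf("hugetlb flag now: %d\n", f.hugetlb);
	return 0;
}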