Message-ID: <20250208184928.219960-1-richard120310@gmail.com>
Date: Sun, 9 Feb 2025 02:49:28 +0800
From: I Hsin Cheng <richard120310@...il.com>
To: akpm@...ux-foundation.org
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
I Hsin Cheng <richard120310@...il.com>
Subject: [PATCH] mm: pgtable: Ensure pml spinlock gets unlocked

When !start_pte is true, the "pml" spinlock is still held and the branch
"out_ptl" is taken. If "ptl" is equal to "pml", the lock "pml" will still
be locked when the function returns.

It's better to add a new label "out_pte" and jump to it when !start_pte is
true in the first place; then no additional check for "start_pte" or
"ptl != pml" is needed, and we simply unlock "pml" and return.
Signed-off-by: I Hsin Cheng <richard120310@...il.com>
---
 mm/pt_reclaim.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
index 7e9455a18aae..163e38f1728d 100644
--- a/mm/pt_reclaim.c
+++ b/mm/pt_reclaim.c
@@ -43,7 +43,7 @@ void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
 	pml = pmd_lock(mm, pmd);
 	start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
 	if (!start_pte)
-		goto out_ptl;
+		goto out_pte;
 	if (ptl != pml)
 		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
 
@@ -68,4 +68,8 @@ void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
 	pte_unmap_unlock(start_pte, ptl);
 	if (ptl != pml)
 		spin_unlock(pml);
+	return;
+
+out_pte:
+	spin_unlock(pml);
 }
--
2.43.0