Message-ID: <Z6nRIriSSJesFQDj@vaxr-BM6660-BM6360>
Date: Mon, 10 Feb 2025 18:12:50 +0800
From: I Hsin Cheng <richard120310@...il.com>
To: Qi Zheng <zhengqi.arch@...edance.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: pgtable: Ensure pml spinlock gets unlocked

On Mon, Feb 10, 2025 at 04:42:13PM +0800, Qi Zheng wrote:
>
>
> On 2025/2/10 16:31, Qi Zheng wrote:
> >
>
> [...]
>
> > > > >
> > > > > diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
> > > > > index 7e9455a18aae..163e38f1728d 100644
> > > > > --- a/mm/pt_reclaim.c
> > > > > +++ b/mm/pt_reclaim.c
> > > > > @@ -43,7 +43,7 @@ void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
> > > > >  	pml = pmd_lock(mm, pmd);
> > > > >  	start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
> > > > >  	if (!start_pte)
>
> Maybe we can return directly here:
>
> 	if (!start_pte) {
> 		spin_unlock(pml);
> 		return;
> 	}
>
> > > > > -		goto out_ptl;
> > > > > +		goto out_pte;
> > > > >  	if (ptl != pml)
> > > > >  		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> > > > > @@ -68,4 +68,8 @@ void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
> > > > >  	pte_unmap_unlock(start_pte, ptl);
> > > > >  	if (ptl != pml)
> > > > >  		spin_unlock(pml);
> > > > > +	return;
> > > > > +
> > > > > +out_pte:
> > > > > +	spin_unlock(pml);
> > > > >  }
> > >
I've sent a new patch for this change and changed the title, because pml
is guaranteed to get unlocked either way; the change just avoids the
redundant branches.
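
For reference, here is a rough sketch of how the error paths look with the
early return Qi Zheng suggested (based only on the hunks quoted above; the
PTE-scanning body of try_to_free_pte() is elided), showing that pml is
released on both paths:

	pml = pmd_lock(mm, pmd);
	start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
	if (!start_pte) {
		/* only pml is held at this point, so drop it and bail out */
		spin_unlock(pml);
		return;
	}
	if (ptl != pml)
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

	/*
	 * ... scan the PTE page; bail out via out_ptl if any entry is still
	 * in use, otherwise free the empty page table and return ...
	 */

out_ptl:
	pte_unmap_unlock(start_pte, ptl);
	if (ptl != pml)
		spin_unlock(pml);
}

Either way the lock/unlock pairs on pml stay balanced; the early return
just makes the pairing obvious at the failure site.
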
Best regards,
I Hsin Cheng