Message-ID: <20210901160742.GR1200268@ziepe.ca>
Date: Wed, 1 Sep 2021 13:07:42 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: David Hildenbrand <david@...hat.com>
Cc: Qi Zheng <zhengqi.arch@...edance.com>, akpm@...ux-foundation.org,
tglx@...utronix.de, hannes@...xchg.org, mhocko@...nel.org,
vdavydov.dev@...il.com, kirill.shutemov@...ux.intel.com,
mika.penttila@...tfour.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
songmuchun@...edance.com
Subject: Re: [PATCH v2 0/9] Free user PTE page table pages
On Wed, Sep 01, 2021 at 02:32:08PM +0200, David Hildenbrand wrote:
> b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.
I suspect the good API here is really more like:
	ptep = pte_try_map(pmdp, &pmd_value)
	if (!ptep) {
		// pmd_value is guaranteed to not be a PTE table pointer.
		if (pmd_XXX(pmd_value))
	}
I.e., the core code will do whatever is needed, including the THP
data-race avoidance, to either return the next-level page table or the
value of a pmd that is not a next-level page table. Callers are much
clearer this way.
Eg this is a fairly representative sample user:
	static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
				   struct mm_walk *walk)
	{
		if (pmd_trans_unstable(pmd))
			goto out;
		pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
And it is obviously pretty easy to integrate any refcount into
pte_try_map and pte_unmap as in my other email.
Jason