Message-ID: <e0bdbd5e-a098-422a-90af-9cf07ce378a4@redhat.com>
Date: Tue, 26 Mar 2024 18:02:22 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, Mark Rutland <mark.rutland@....com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v1 3/4] mm/memory: Use ptep_get_lockless_norecency()
for orig_pte
On 15.02.24 13:17, Ryan Roberts wrote:
> Let's convert handle_pte_fault()'s use of ptep_get_lockless() to
> ptep_get_lockless_norecency() to save orig_pte.
>
> There are a number of places that follow this model:
>
> 	orig_pte = ptep_get_lockless(ptep)
> 	...
> 	<lock>
> 	if (!pte_same(orig_pte, ptep_get(ptep)))
> 		// RACE!
> 	...
> 	<unlock>
>
> So we need to be careful to convert all of those to use
> pte_same_norecency() so that the access and dirty bits are excluded from
> the comparison.
>
> Additionally there are a couple of places that genuinely rely on the
> access and dirty bits of orig_pte, but with some careful refactoring, we
> can use ptep_get() once we are holding the lock to achieve equivalent
> logic.
We really should document that changed behavior somewhere where it can
be easily found: that orig_pte might have incomplete/stale
accessed/dirty information.
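
To spell out what that means for callers: any comparison against such an
orig_pte has to ignore the accessed and dirty bits. Presumably
pte_same_norecency() boils down to something like the sketch below (using
the existing pte_mkold()/pte_mkclean() helpers; not necessarily the exact
implementation from this series):

	/*
	 * Compare two PTE values while ignoring the accessed and dirty
	 * bits, which may be stale if one of them was read with a
	 * norecency getter.
	 */
	static inline int pte_same_norecency(pte_t pte_a, pte_t pte_b)
	{
		return pte_same(pte_mkold(pte_mkclean(pte_a)),
				pte_mkold(pte_mkclean(pte_b)));
	}
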
> @@ -5343,7 +5356,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
>  						 vmf->address, &vmf->ptl);
>  		if (unlikely(!vmf->pte))
>  			return 0;
> -		vmf->orig_pte = ptep_get_lockless(vmf->pte);
> +		vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte);
>  		vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;
>  
>  		if (pte_none(vmf->orig_pte)) {
> @@ -5363,7 +5376,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
>
>  	spin_lock(vmf->ptl);
>  	entry = vmf->orig_pte;
> -	if (unlikely(!pte_same(ptep_get(vmf->pte), entry))) {
> +	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), entry))) {
>  		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
>  		goto unlock;
I was wondering about the following:
Assume the PTE is not dirty.

Thread 1 does

	vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte)
	/* not dirty */

	/* Now, thread 2 ends up setting the PTE dirty under PT lock. */

	spin_lock(vmf->ptl);
	entry = vmf->orig_pte;
	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), entry))) {
		...
	}
	...
	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vmf->vma, ...))
	...
	pte_unmap_unlock(vmf->pte, vmf->ptl);
The generic ptep_set_access_flags() will do another pte_same() check,
realize "hey, there was a change, let's update the PTE!", and the

	set_pte_at(vma->vm_mm, address, ptep, entry);

would overwrite the dirty bit set by thread 2, because entry was derived
from the stale, not-dirty orig_pte.
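
To make that concrete, the generic helper is roughly the following
(paraphrased from mm/pgtable-generic.c; the exact TLB-flush details don't
matter here):

	int ptep_set_access_flags(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty)
	{
		int changed = !pte_same(ptep_get(ptep), entry);

		if (changed) {
			/*
			 * entry is orig_pte + young, i.e. still !dirty, so
			 * this store discards the dirty bit thread 2 set.
			 */
			set_pte_at(vma->vm_mm, address, ptep, entry);
			flush_tlb_fix_spurious_fault(vma, address, ptep);
		}
		return changed;
	}
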
--
Cheers,
David / dhildenb