Message-ID: <f3047412-a53c-f8ba-f8aa-4f46e04c5a31@linux.alibaba.com>
Date: Wed, 25 Oct 2023 11:03:06 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: "Yin, Fengwei" <fengwei.yin@...el.com>,
Barry Song <21cnbao@...il.com>
Cc: catalin.marinas@....com, will@...nel.org,
akpm@...ux-foundation.org, v-songbaohua@...o.com,
yuzhao@...gle.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the
access bit
On 10/25/2023 9:39 AM, Yin, Fengwei wrote:
>
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index 0bd18de9fd97..2979d796ba9d 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>>> static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>>> unsigned long address, pte_t *ptep)
>>> {
>>> - int young = ptep_test_and_clear_young(vma, address, ptep);
>>> -
>>> - if (young) {
>>> - /*
>>> - * We can elide the trailing DSB here since the worst that can
>>> - * happen is that a CPU continues to use the young entry in its
>>> - * TLB and we mistakenly reclaim the associated page. The
>>> - * window for such an event is bounded by the next
>>> - * context-switch, which provides a DSB to complete the TLB
>>> - * invalidation.
>>> - */
>>> - flush_tlb_page_nosync(vma, address);
>>> - }
>>> -
>>> - return young;
>>> + /*
>>> + * This comment is borrowed from x86, but applies equally to ARM64:
>>> + *
>>> + * Clearing the accessed bit without a TLB flush doesn't cause
>>> + * data corruption. [ It could cause incorrect page aging and
>>> + * the (mistaken) reclaim of hot pages, but the chance of that
>>> + * should be relatively low. ]
>>> + *
>>> + * So as a performance optimization don't flush the TLB when
>>> + * clearing the accessed bit, it will eventually be flushed by
>>> + * a context switch or a VM operation anyway. [ In the rare
>>> + * event of it not getting flushed for a long time the delay
>>> + * shouldn't really matter because there's no real memory
>>> + * pressure for swapout to react to. ]
>>> + */
>>> + return ptep_test_and_clear_young(vma, address, ptep);
>>> }
> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/:
>
> This is blindly copied from x86 and isn't true for us: we don't invalidate
> the TLB on context switch. That means our window for keeping the stale
> entries around is potentially much bigger and might not be a great idea.
>
>
> My understanding is that arm64 doesn't invalidate the TLB during
> context switch. The flush_tlb_page_nosync() here + DSB during context
Yes, we only perform a TLB flush on context switch when the ASID space 
is exhausted, and I think this is the same as x86, IIUC.
> switch make sure the TLB is invalidated during context switch.
> So we can't remove flush_tlb_page_nosync() here? Or has something changed
> for arm64 (I have zero knowledge of the TLB on arm64, so some obvious
> thing may be missed)? Thanks.
IMHO, the TLB entries can easily be evicted or flushed if the system is 
under memory pressure, so as Barry said, the chance of reclaiming a hot 
page is relatively low; at least on x86, we did not see any heavy 
refault issues.
For MGLRU, it uses ptep_test_and_clear_young() instead of 
ptep_clear_flush_young_notify(), and we have not found any problems 
since deploying it to ARM servers.