Message-ID: <0e6d1f1f-a917-4e36-80de-03ba94c6d850@arm.com>
Date: Mon, 20 Oct 2025 07:39:51 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Catalin Marinas <catalin.marinas@....com>,
Huang Ying <ying.huang@...ux.alibaba.com>
Cc: Will Deacon <will@...nel.org>, Ryan Roberts <ryan.roberts@....com>,
Gavin Shan <gshan@...hat.com>, Ard Biesheuvel <ardb@...nel.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Yicong Yang <yangyicong@...ilicon.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
On 17/10/25 11:36 PM, Catalin Marinas wrote:
> On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
>> Currently, pte_mkwrite_novma() makes the PTE dirty unconditionally,
>> which may wrongly mark pages that are never written as dirty. For
>> example, do_swap_page() may map exclusive pages with writable and
>> clean PTEs if the VMA is writable and the page fault is for read
>> access, yet pte_mkwrite_novma() dirties the PTE anyway. This can
>> cause unnecessary disk writes if the pages are never written before
>> being reclaimed.
>>
>> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if
>> the PTE_DIRTY bit is set, making it possible for a PTE to be both
>> writable and clean.
>>
>> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
>> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
>> pte_mkwrite() only set the PTE_WRITE bit, while set_pte_at() cleared
>> the PTE_RDONLY bit only if both the PTE_WRITE and PTE_DIRTY bits
>> were set.
>>
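As a side note, for anyone less familiar with the arm64 PTE encoding:
"writable but clean" here means PTE_WRITE is set while PTE_RDONLY remains
set, so the first store still takes a permission fault (or, with hardware
DBM, the CPU clears PTE_RDONLY itself), and only at that point does the
page become dirty. Below is a minimal standalone model of the patched
logic (illustrative userspace C, not kernel code; the bit masks mirror
arch/arm64/include/asm/pgtable-hwdef.h as I remember them):

#include <assert.h>
#include <stdint.h>

#define PTE_RDONLY	(UINT64_C(1) << 7)	/* AP[2]: clear => hardware-writable */
#define PTE_WRITE	(UINT64_C(1) << 51)	/* software write permission (DBM bit) */
#define PTE_DIRTY	(UINT64_C(1) << 55)	/* software dirty */

/* Models the patched helper: only dirty PTEs become hardware-writable. */
static uint64_t model_pte_mkwrite_novma(uint64_t pte)
{
	pte |= PTE_WRITE;
	if (pte & PTE_DIRTY)
		pte &= ~PTE_RDONLY;
	return pte;
}

int main(void)
{
	uint64_t clean = PTE_RDONLY;			/* clean, read-only */
	uint64_t dirty = PTE_RDONLY | PTE_DIRTY;	/* dirty, read-only */

	/*
	 * A clean PTE gains PTE_WRITE but keeps PTE_RDONLY: the first
	 * write faults (or DBM hardware clears PTE_RDONLY), and only
	 * then is the page marked dirty.
	 */
	assert(model_pte_mkwrite_novma(clean) & PTE_RDONLY);

	/* A dirty PTE becomes hardware-writable immediately. */
	assert(!(model_pte_mkwrite_novma(dirty) & PTE_RDONLY));
	return 0;
}
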
>> To measure the performance impact of the patch, on an arm64 server
>> machine, run 16 redis-server processes on socket 1 and 16
>> memtier_benchmark processes on socket 0 with mostly "get"
>> transactions (that is, redis-server will mostly only read memory).
>> The memory footprint of redis-server is larger than the available
>> memory, so swapping out/in is triggered. Test results show that the
>> patch avoids most swap-out because the pages are mostly clean, and
>> benchmark throughput improves by ~23.9% in the test.
>>
>> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
>> Signed-off-by: Huang Ying <ying.huang@...ux.alibaba.com>
>> Cc: Catalin Marinas <catalin.marinas@....com>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@....com>
>> Cc: Ryan Roberts <ryan.roberts@....com>
>> Cc: Gavin Shan <gshan@...hat.com>
>> Cc: Ard Biesheuvel <ardb@...nel.org>
>> Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>
>> Cc: Yicong Yang <yangyicong@...ilicon.com>
>> Cc: linux-arm-kernel@...ts.infradead.org
>> Cc: linux-kernel@...r.kernel.org
>> ---
>> arch/arm64/include/asm/pgtable.h | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index aa89c2e67ebc..0944e296dd4a 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>>  {
>>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>> +	if (pte_sw_dirty(pte))
>> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>  	return pte;
>>  }
>
> This seems to be the right thing. I recall grep'ing years ago
> (obviously not hard enough): most pte_mkwrite() call sites were
> paired with a pte_mkdirty(). But I missed do_swap_page() and
> possibly others.
>
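Right, the usual pairing in the write-fault paths looks roughly like the
snippet below (paraphrased from memory from do_anonymous_page() in
mm/memory.c, not a verbatim quote):

	/*
	 * Write faults dirty the PTE before (or together with) making
	 * it writable, so clearing PTE_RDONLY in pte_mkwrite() was
	 * harmless here; do_swap_page() mapping a clean PTE writable
	 * on a read fault is the exception.
	 */
	if (vma->vm_flags & VM_WRITE)
		entry = pte_mkwrite(pte_mkdirty(entry), vma);
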
> For this patch:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@....com>
>
> I wonder whether we should also add (as a separate patch):
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 830107b6dd08..df1c552ef11c 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
>  }
>
> For completeness, also (and maybe other combinations):
>
> WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
We could also add similar tests, combining pte_mkwrite_novma() with
pte_wrprotect():
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 830107b6dd08..573632ebf304 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -102,6 +102,11 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
 	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
 	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
+
+	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
+	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
+	WARN_ON(!pte_write(pte_mkwrite_novma(pte_wrprotect(pte))));
+	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite_novma(pte))));
 }
 
 static void __init pte_advanced_tests(struct pgtable_debug_args *args)
@@ -195,6 +200,9 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd, args->vma))));
 	WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd))));
 	WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd))));
+
+	WARN_ON(!pmd_write(pmd_mkwrite_novma(pmd_wrprotect(pmd))));
+	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite_novma(pmd))));
 	/*
 	 * A huge page does not point to next level page table
 	 * entry. Hence this must qualify as pmd_bad().
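FWIW, these identities are exercised once during boot when the kernel is
built with CONFIG_DEBUG_VM_PGTABLE=y, and any violation shows up as a
WARN_ON splat in dmesg.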
>
> I cc'ed linux-mm in case we missed anything. If nothing is raised, I'll
> queue it next week.
>
> Thanks.
>