Message-ID: <721fdf6e-61c4-2e31-c584-04d8380f2952@loongson.cn>
Date: Thu, 6 Nov 2025 17:53:41 +0800
From: Tianyang Zhang <zhangtianyang@...ngson.cn>
To: Bibo Mao <maobibo@...ngson.cn>, chenhuacai@...nel.org, kernel@...0n.name,
akpm@...ux-foundation.org, willy@...radead.org, david@...hat.com,
linmag7@...il.com, thuth@...hat.com, apopple@...dia.com
Cc: loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org,
Liupu Wang <wangliupu@...ngson.cn>
Subject: Re: [PATCH] Loongarch:Make pte/pmd_modify can set _PAGE_MODIFIED
On 2025/11/6 3:07 PM, Bibo Mao wrote:
>
>
> On 2025/11/4 3:30 PM, Tianyang Zhang wrote:
>> In the current pte_modify operation, _PAGE_DIRTY might be cleared. Since
>> the hardware page walk does not have a predefined _PAGE_MODIFIED flag,
>> this could lead to loss of valid data in certain scenarios.
>>
>> The new modification checks whether the original PTE has the
>> _PAGE_DIRTY flag set. If it does, the _PAGE_MODIFIED bit is also set,
>> ensuring that the pte_dirty interface can return accurate information.
>>
>> Co-developed-by: Liupu Wang <wangliupu@...ngson.cn>
>> Signed-off-by: Liupu Wang <wangliupu@...ngson.cn>
>> Signed-off-by: Tianyang Zhang <zhangtianyang@...ngson.cn>
>> ---
>> arch/loongarch/include/asm/pgtable.h | 17 +++++++++++++----
>> 1 file changed, 13 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
>> index bd128696e96d..106abfa5183b 100644
>> --- a/arch/loongarch/include/asm/pgtable.h
>> +++ b/arch/loongarch/include/asm/pgtable.h
>> @@ -424,8 +424,13 @@ static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
>> static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>> {
>> - return __pte((pte_val(pte) & _PAGE_CHG_MASK) |
>> - (pgprot_val(newprot) & ~_PAGE_CHG_MASK));
>> + unsigned long val = (pte_val(pte) & _PAGE_CHG_MASK) |
>> + (pgprot_val(newprot) & ~_PAGE_CHG_MASK);
>> +
>> + if (pte_val(pte) & _PAGE_DIRTY)
>> + val |= _PAGE_MODIFIED;
> Since ptep_get_and_clear() is not an atomic operation on LoongArch, unlike
> on other architectures, consider this scenario with HW PTW enabled:
> CPU 0:                                    CPU 1:
> old_pte = ptep_modify_prot_start();
>   old_pte = ptep_get(ptep);
>                                           write(buf);
>                                           *HW will set _PAGE_DIRTY bit*
>   pte_clear(mm, address, ptep);
>   ^^^^^^^^^^ The _PAGE_DIRTY bit set by CPU 1 is not present in old_pte,
>   so _PAGE_DIRTY will be lost as well. ^^^^^^^^^^
> pte = pte_modify(old_pte, ..)
> ptep_modify_prot_commit(.., pte)
There does appear to be an issue here. It seems we should define
`__HAVE_ARCH_PTEP_GET_AND_CLEAR` and implement ptep_get_and_clear() via
an atomic xchg.
However, I believe that change should be submitted as a separate patch.
Thanks
Tianyang
>
> Regards
> Bibo Mao
>> +
>> + return __pte(val);
>> }
>> extern void __update_tlb(struct vm_area_struct *vma,
>> @@ -547,9 +552,13 @@ static inline struct page *pmd_page(pmd_t pmd)
>> static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
>> {
>> - pmd_val(pmd) = (pmd_val(pmd) & _HPAGE_CHG_MASK) |
>> + unsigned long val = (pmd_val(pmd) & _HPAGE_CHG_MASK) |
>> (pgprot_val(newprot) & ~_HPAGE_CHG_MASK);
>> - return pmd;
>> +
>> + if (pmd_val(pmd) & _PAGE_DIRTY)
>> + val |= _PAGE_MODIFIED;
>> +
>> + return __pmd(val);
>> }
>> static inline pmd_t pmd_mkinvalid(pmd_t pmd)
>>