Message-ID: <1c54d958-9da2-97d0-e9a8-7629d4a3f7bd@loongson.cn>
Date: Fri, 18 Mar 2022 09:17:15 +0800
From: maobibo <maobibo@...ngson.cn>
To: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Anshuman Khandual <anshuman.khandual@....com>
Subject: Re: [PATCH v2] mm: add access/dirty bit on numa page fault
On 03/17/2022 08:32 PM, David Hildenbrand wrote:
> On 17.03.22 07:50, Bibo Mao wrote:
>> On platforms like x86/arm that support hardware page table walking,
>> the access and dirty bits are set by hardware; however, on platforms
>> without such hardware support, the access and dirty bits are set by
>> software in the next trap.
>>
>> During a NUMA page fault, the dirty bit can be set on the old pte if
>> migration fails on a write fault. If migration succeeds, the access
>> bit can be set on the migrated new pte, and the dirty bit can also
>> be set on a write fault.
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> ---
>> mm/memory.c | 21 ++++++++++++++++++++-
>> 1 file changed, 20 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c125c4969913..65813bec9c06 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>>  		page_nid = target_nid;
>>  		flags |= TNF_MIGRATED;
>> +
>> +		/*
>> +		 * Update the pte entry with the access bit, and with
>> +		 * the dirty bit for a write fault.
>> +		 */
>> +		spin_lock(vmf->ptl);
>
> Ehm, are you sure? We did a pte_unmap_unlock(), so you most certainly need a
>
> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
Yes, we need to probe the pte entry again after pte_unmap_unlock().
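Something like the existing migrate-fail path below, i.e. (untested sketch):

	/* re-establish the pte mapping before taking the lock */
	vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
	spin_lock(vmf->ptl);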
>
>
> Also, don't we need pte_same() checks before we do anything after
> dropping the PT lock?
I do not think so. If the page migration succeeds, the pte entry has been
changed as well, so it will be different from the original.
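For reference, the check being discussed is the pattern already used in the
migrate-fail path (sketch only; if the migration succeeded, the new pte would
no longer match vmf->orig_pte, so this check would not pass):

	spin_lock(vmf->ptl);
	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		goto out;
	}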
regards
bibo,mao
>
>> +		pte = *vmf->pte;
>> +		pte = pte_mkyoung(pte);
>> +		if (was_writable) {
>> +			pte = pte_mkwrite(pte);
>> +			if (vmf->flags & FAULT_FLAG_WRITE)
>> +				pte = pte_mkdirty(pte);
>> +		}
>> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>> +		update_mmu_cache(vma, vmf->address, vmf->pte);
>> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>  	} else {
>>  		flags |= TNF_MIGRATE_FAIL;
>>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>>  	pte = pte_mkyoung(pte);
>> -	if (was_writable)
>> +	if (was_writable) {
>>  		pte = pte_mkwrite(pte);
>> +		if (vmf->flags & FAULT_FLAG_WRITE)
>> +			pte = pte_mkdirty(pte);
>> +	}
>>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>
>