Message-ID: <07f78e99-6e59-0bce-8ac0-50d7c7600461@oracle.com>
Date: Wed, 24 Jun 2020 17:30:37 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Bibo Mao <maobibo@...ngson.cn>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Paul Burton <paulburton@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Daniel Silsby <dansilsby@...il.com>
Cc: linux-mips@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 2/3] mm/huge_memory.c: update tlb entry if pmd is changed
On 6/24/20 2:26 AM, Bibo Mao wrote:
> When set_pmd_at is called in do_huge_pmd_anonymous_page, a new
> TLB entry can be added by software on the MIPS platform.
>
> Add update_mmu_cache_pmd when the pmd entry is set.
> update_mmu_cache_pmd is defined as empty except on the arc/mips
> platforms, so this patch has no negative effect on other platforms.
I am confused by this comment. It appears that update_mmu_cache_pmd
is defined as non-empty on the arc, mips, powerpc, and sparc
architectures. Am I missing something?
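
For reference, the two patterns look roughly like this (a sketch from
memory, not verbatim from any particular tree):

/* Most architectures: a no-op, so the call compiles away. */
#define update_mmu_cache_pmd(vma, addr, pmd) do { } while (0)

/* mips (sketch): software-managed TLB, so preload an entry. */
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmdp)
{
	pte_t pte = *(pte_t *)pmdp;

	__update_tlb(vma, address, pte);
}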
If those architectures do provide update_mmu_cache_pmd, then the
previous patch and this one now call update_mmu_cache_pmd with the
actual faulting address instead of the huge-page-aligned address.
That was intentional for mips, but are there any potential issues on
the other architectures?

I am no expert in any of those architectures. arc looks like it could
be problematic: its update_mmu_cache_pmd calls update_mmu_cache, which
then operates on (address & PAGE_MASK), and that value could now be
different.
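
To make the concern concrete, here is a trivial userspace sketch with
hypothetical 4K base pages and 2M huge pages; the masked values it
prints are the addresses the old and new call sites would hand down
to the arch code:

#include <stdio.h>

#define PAGE_MASK	(~((1UL << 12) - 1))	/* 4K base pages (example) */
#define HPAGE_PMD_MASK	(~((1UL << 21) - 1))	/* 2M huge pages (example) */

int main(void)
{
	unsigned long address = 0x200000UL + 0x3000UL; /* fault inside a THP */

	/* What update_mmu_cache_pmd() received before these patches: */
	printf("haddr = %#lx\n", address & HPAGE_PMD_MASK);	/* 0x200000 */

	/* What arc's update_mmu_cache() ends up masking to now: */
	printf("vaddr = %#lx\n", address & PAGE_MASK);		/* 0x203000 */

	return 0;
}

Before the change the two masks agreed (haddr & PAGE_MASK == haddr);
now they need not.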
--
Mike Kravetz
>
> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
> ---
>  mm/huge_memory.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 0f9187b..8b4ccf7 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -643,6 +643,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>  		lru_cache_add_active_or_unevictable(page, vma);
>  		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>  		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
> +		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>  		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>  		mm_inc_nr_ptes(vma->vm_mm);
>  		spin_unlock(vmf->ptl);
> @@ -756,6 +757,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  	} else {
>  		set_huge_zero_page(pgtable, vma->vm_mm, vma,
>  				   haddr, vmf->pmd, zero_page);
> +		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>  		spin_unlock(vmf->ptl);
>  		set = true;
>  	}
>