Message-ID: <87o8pzgfws.fsf@mpe.ellerman.id.au>
Date: Thu, 04 Jun 2020 22:43:31 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux Next Mailing List <linux-next@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
PowerPC <linuxppc-dev@...ts.ozlabs.org>
Subject: Re: linux-next: fix ups for clashes between akpm and powerpc trees
Stephen Rothwell <sfr@...b.auug.org.au> writes:
> Hi all,
>
> On Thu, 4 Jun 2020 16:52:46 +1000 Stephen Rothwell <sfr@...b.auug.org.au> wrote:
>>
>> diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
>> index db4ef44af22f..569d98a41881 100644
>> --- a/arch/powerpc/mm/kasan/8xx.c
>> +++ b/arch/powerpc/mm/kasan/8xx.c
>> @@ -10,7 +10,7 @@
>> static int __init
>> kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)
>> {
>> - pmd_t *pmd = pmd_ptr_k(k_start);
>> + pmd_t *pmd = pmd_off_k(k_start);
>> unsigned long k_cur, k_next;
>>
>> for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd += 2, block += SZ_8M) {
>> @@ -59,7 +59,7 @@ int __init kasan_init_region(void *start, size_t size)
>> return ret;
>>
>> for (; k_cur < k_end; k_cur += PAGE_SIZE) {
>> - pmd_t *pmd = pmd_ptr_k(k_cur);
>> + pmd_t *pmd = pmd_off_k(k_cur);
>> void *va = block + k_cur - k_start;
>> pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>
>> diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
>> index 4bc491a4a1fd..a32b4640b9de 100644
>> --- a/arch/powerpc/mm/kasan/book3s_32.c
>> +++ b/arch/powerpc/mm/kasan/book3s_32.c
>> @@ -46,7 +46,7 @@ int __init kasan_init_region(void *start, size_t size)
>> kasan_update_early_region(k_start, k_cur, __pte(0));
>>
>> for (; k_cur < k_end; k_cur += PAGE_SIZE) {
>> - pmd_t *pmd = pmd_ptr_k(k_cur);
>> + pmd_t *pmd = pmd_off_k(k_cur);
>> void *va = block + k_cur - k_start;
>> pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>
>> diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
>> index 286441bbbe49..92e8929cbe3e 100644
>> --- a/arch/powerpc/mm/nohash/8xx.c
>> +++ b/arch/powerpc/mm/nohash/8xx.c
>> @@ -74,7 +74,7 @@ static pte_t __init *early_hugepd_alloc_kernel(hugepd_t *pmdp, unsigned long va)
>> static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
>> pgprot_t prot, int psize, bool new)
>> {
>> - pmd_t *pmdp = pmd_ptr_k(va);
>> + pmd_t *pmdp = pmd_off_k(va);
>> pte_t *ptep;
>>
>> if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M))
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index 45a0556089e8..1136257c3a99 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -264,7 +264,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>> #if defined(CONFIG_PPC_8xx)
>> void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte)
>> {
>> - pmd_t *pmd = pmd_ptr(mm, addr);
>> + pmd_t *pmd = pmd_off(mm, addr);
>> pte_basic_t val;
>> pte_basic_t *entry = &ptep->pte;
>> int num = is_hugepd(*((hugepd_t *)pmd)) ? 1 : SZ_512K / SZ_4K;
>> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
>> index e2d054c9575e..6eb4eab79385 100644
>> --- a/arch/powerpc/mm/pgtable_32.c
>> +++ b/arch/powerpc/mm/pgtable_32.c
>> @@ -40,7 +40,7 @@ notrace void __init early_ioremap_init(void)
>> {
>> unsigned long addr = ALIGN_DOWN(FIXADDR_START, PGDIR_SIZE);
>> pte_t *ptep = (pte_t *)early_fixmap_pagetable;
>> - pmd_t *pmdp = pmd_ptr_k(addr);
>> + pmd_t *pmdp = pmd_off_k(addr);
>>
>> for (; (s32)(FIXADDR_TOP - addr) > 0;
>> addr += PGDIR_SIZE, ptep += PTRS_PER_PTE, pmdp++)
>
> I have added the above hunks to linux-next for tomorrow as a fix for
> mm-pgtable-add-shortcuts-for-accessing-kernel-pmd-and-pte.
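>
> [For context: the hunks above are a mechanical rename, pmd_ptr{,_k}() ->
> pmd_off{,_k}(), to match the shortcuts introduced in include/linux/pgtable.h
> by the akpm-tree patch named above. Those helpers are defined roughly as
> follows (sketch, not the authoritative source):

```c
/* Walk from the pgd down to the pmd covering @va in @mm's page tables. */
static inline pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
{
	return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
}

/* Same walk, but in the kernel (init_mm) page tables. */
static inline pmd_t *pmd_off_k(unsigned long va)
{
	return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
}
```

> So callers such as the powerpc kasan and 8xx code above get the same
> pmd pointer as before, just via the shared generic helper.]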
Looks good. Thanks.
cheers