Message-ID: <CAJF2gTRVG+yX7fktLru4U=OVKrTg73kTR5hirw1hh1P9c+MNOQ@mail.gmail.com>
Date: Tue, 15 Aug 2023 11:11:52 +0800
From: Guo Ren <guoren@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: Guo Ren <guoren@...ux.alibaba.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
Stephen Rothwell <sfr@...b.auug.org.au>
Subject: Re: linux-next: manual merge of the csky tree with the mm tree
On Tue, Aug 15, 2023 at 8:46 AM Stephen Rothwell <sfr@...b.auug.org.au> wrote:
>
> Hi all,
>
> Today's linux-next merge of the csky tree got a conflict in:
>
> arch/csky/abiv2/cacheflush.c
>
> between commit:
>
> 1222e1310d64 ("csky: implement the new page table range API")
Could I take this patch into the csky next tree to resolve the conflict?
>
> from the mm tree and commit:
>
> 1362d15ffb59 ("csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache")
>
> from the csky tree.
>
> I fixed it up (I think - see below) and can carry the fix as
> necessary. This is now fixed as far as linux-next is concerned, but any
> non-trivial conflicts should be mentioned to your upstream maintainer
> when your tree is submitted for merging. You may also want to consider
> cooperating with the maintainer of the conflicting tree to minimise any
> particularly complex conflicts.
>
> --
> Cheers,
> Stephen Rothwell
>
> diff --cc arch/csky/abiv2/cacheflush.c
> index d05a551af5d5,500eb8f69397..000000000000
> --- a/arch/csky/abiv2/cacheflush.c
> +++ b/arch/csky/abiv2/cacheflush.c
> @@@ -16,23 -15,22 +16,22 @@@ void update_mmu_cache_range(struct vm_f
>
> flush_tlb_page(vma, address);
>
> - if (!pfn_valid(pte_pfn(*pte)))
> + if (!pfn_valid(pfn))
> return;
>
> - page = pfn_to_page(pte_pfn(*pte));
> - if (page == ZERO_PAGE(0))
> + folio = page_folio(pfn_to_page(pfn));
> +
> + if (test_and_set_bit(PG_dcache_clean, &folio->flags))
> return;
>
> - if (test_and_set_bit(PG_dcache_clean, &page->flags))
> - return;
> + for (i = 0; i < folio_nr_pages(folio); i++) {
> + unsigned long addr = (unsigned long) kmap_local_folio(folio,
> + i * PAGE_SIZE);
>
> - addr = (unsigned long) kmap_atomic(page);
> -
> - icache_inv_range(address, address + PAGE_SIZE);
> - dcache_wb_range(addr, addr + PAGE_SIZE);
> -
> - kunmap_atomic((void *) addr);
> ++ icache_inv_range(address, address + PAGE_SIZE);
> + dcache_wb_range(addr, addr + PAGE_SIZE);
> - if (vma->vm_flags & VM_EXEC)
> - icache_inv_range(addr, addr + PAGE_SIZE);
> + kunmap_local((void *) addr);
> + }
> }
>
> void flush_icache_deferred(struct mm_struct *mm)
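
For reference, the resolved function would read roughly as below once the
fix-up is applied. The hunk truncates the signature and omits the local
declarations, so those are filled in here from the two commits; treat this
as a sketch of the merge result, not the exact file contents:

void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
		unsigned long address, pte_t *pte, unsigned int nr)
{
	unsigned long pfn = pte_pfn(*pte);
	struct folio *folio;
	unsigned int i;

	flush_tlb_page(vma, address);

	if (!pfn_valid(pfn))
		return;

	folio = page_folio(pfn_to_page(pfn));

	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
		return;

	for (i = 0; i < folio_nr_pages(folio); i++) {
		unsigned long addr = (unsigned long) kmap_local_folio(folio,
								i * PAGE_SIZE);

		/* invalidate stale I-cache lines for the faulting address */
		icache_inv_range(address, address + PAGE_SIZE);
		/* write back the D-cache for this page of the folio */
		dcache_wb_range(addr, addr + PAGE_SIZE);
		kunmap_local((void *) addr);
	}
}

As in the hunk above, the I-cache invalidate covers the page at the faulting
address on each pass, while the D-cache write-back walks every page of the
folio through kmap_local_folio().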
--
Best Regards
Guo Ren