Message-ID: <9425f83a-e923-b119-cf7a-6b9a9f8dd8b7@csgroup.eu>
Date: Tue, 28 Feb 2023 06:58:18 +0000
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Matthew Wilcox <willy@...radead.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>
Subject: Re: [PATCH v2 18/30] powerpc: Implement the new page table range API

On 27/02/2023 at 21:20, Matthew Wilcox wrote:
> On Mon, Feb 27, 2023 at 07:45:08PM +0000, Christophe Leroy wrote:
>> Hi,
>>
>> On 27/02/2023 at 18:57, Matthew Wilcox (Oracle) wrote:
>>> Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
>>> Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
>>> per-folio.
>>>
>>> I'm unsure about my merging of flush_dcache_icache_hugepage() and
>>> flush_dcache_icache_page() into flush_dcache_icache_folio() and subsequent
>>> removal of flush_dcache_icache_phys(). Please review.
>>
>> Not sure why you want to remove flush_dcache_icache_phys().
>
> Well, I didn't, necessarily. It's just that when I merged
> flush_dcache_icache_hugepage() and flush_dcache_icache_page()
> together, it was left with no callers.
>
>> Although that's only feasible when the address bus is not wider than 32
>> bits, and it cannot be done on BOOKE as you can't switch the MMU off
>> there, flush_dcache_icache_phys() allows flushing unmapped pages without
>> having to map them first. So it is more efficient.
>
> And it was just never done for the hugepage case?

I think on PPC32 hugepages are only available on 8xx and BOOKE. The 8xx
doesn't have HIGHMEM, and BOOKE cannot switch the MMU off. So there is no
use case for flush_dcache_icache_phys() with hugepages.
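
For reference, the constraint could be expressed as a compile-time check
along these lines (just a sketch; can_flush_by_phys() is a made-up name,
not an existing kernel helper):

static inline bool can_flush_by_phys(void)
{
	/*
	 * BOOKE cannot turn the data MMU off, so flushing by
	 * physical address is not possible there.
	 */
	if (IS_ENABLED(CONFIG_BOOKE))
		return false;
	/* A physical address wider than a register can't be used raw. */
	return sizeof(phys_addr_t) <= sizeof(void *);
}
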
>
>>> @@ -148,17 +103,20 @@ static void __flush_dcache_icache(void *p)
>>>  	invalidate_icache_range(addr, addr + PAGE_SIZE);
>>>  }
>>>  
>>> -static void flush_dcache_icache_hugepage(struct page *page)
>>> +void flush_dcache_icache_folio(struct folio *folio)
>>>  {
>>> -	int i;
>>> -	int nr = compound_nr(page);
>>> +	unsigned int i, nr = folio_nr_pages(folio);
>>>  
>>> -	if (!PageHighMem(page)) {
>>> +	if (flush_coherent_icache())
>>> +		return;
>>> +
>>> +	if (!folio_test_highmem(folio)) {
>>> +		void *addr = folio_address(folio);
>>>  		for (i = 0; i < nr; i++)
>>> -			__flush_dcache_icache(lowmem_page_address(page + i));
>>> +			__flush_dcache_icache(addr + i * PAGE_SIZE);
>>>  	} else {
>>>  		for (i = 0; i < nr; i++) {
>>> -			void *start = kmap_local_page(page + i);
>>> +			void *start = kmap_local_folio(folio, i * PAGE_SIZE);
>>>  
>>>  			__flush_dcache_icache(start);
>>>  			kunmap_local(start);
>
> So you'd like this to be:
>
> 	} else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) {
> 		for (i = 0; i < nr; i++) {
> 			void *start = kmap_local_folio(folio, i * PAGE_SIZE);
> 			__flush_dcache_icache(start);
> 			kunmap_local(start);
> 		}
> 	} else {
> 		unsigned long pfn = folio_pfn(folio);
> 		for (i = 0; i < nr; i++)
> 			flush_dcache_icache_phys((pfn + i) * PAGE_SIZE);
> 	}
>
> (or maybe you'd prefer a flush_dcache_icache_pfn() that doesn't need to
> worry about PAGE_MASK).

Yes, looks good.
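
A pfn-based helper could be as simple as the following (untested sketch,
assuming flush_dcache_icache_phys() stays as it is today and takes a
page-aligned physical address):

static void flush_dcache_icache_pfn(unsigned long pfn)
{
	/*
	 * pfn << PAGE_SHIFT is page-aligned by construction, so no
	 * PAGE_MASK dance is needed. This path is only reachable when
	 * phys_addr_t fits in a register, so the shift cannot overflow.
	 */
	flush_dcache_icache_phys(pfn << PAGE_SHIFT);
}
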
Christophe