Message-ID: <87sfjmlim3.fsf@mpe.ellerman.id.au>
Date: Mon, 17 Oct 2022 23:50:12 +1100
From: Michael Ellerman <mpe@...erman.id.au>
To: Arnd Bergmann <arnd@...db.de>,
Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@...roup.eu>,
Baoquan He <bhe@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Linux-Arch <linux-arch@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@...radead.org>,
wangkefeng.wang@...wei.com, schnelle@...ux.ibm.com,
David Laight <David.Laight@...lab.com>,
Stafford Horne <shorne@...il.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [RFC PATCH 7/8] mm/ioremap: Consider IOREMAP space in generic ioremap

"Arnd Bergmann" <arnd@...db.de> writes:
> On Sun, Oct 16, 2022, at 9:54 AM, Alexander Gordeev wrote:
>> On Wed, Oct 12, 2022 at 12:39:11PM +0200, Arnd Bergmann wrote:
>>> "Some" means exactly powerpc64, right? It looks like microblaze
>>> and powerpc32 still share some of this code, but effectively
>>> just use the vmalloc area once the slab allocator is up.
>>>
>>> Is the special case still useful for powerpc64 or could this be
>>> changed to do it the same as everything else?
>>
>> Or make it the other way around and set IOREMAP_START/IOREMAP_END
>> to VMALLOC_START/VMALLOC_END by default?
>
> Sure, if there is a reason for actually making them different.
> From the git history, it appears that before commit 3d5134ee8341
> ("[POWERPC] Rewrite IO allocation & mapping on powerpc64"), the
> ioremap() and vmalloc() handling was largely duplicated. Ben
> cleaned it up by making most of the implementation shared but left
> the separate address spaces.
>
> My guess is that there was no technical reason for this, other
> than having no reason to change the behavior at the time.
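
(On the default Alexander suggests above: in a generic header that would
just be the usual #ifndef fallback. A minimal sketch, with hypothetical
stand-in values for the arch-provided vmalloc bounds:)

/* Hypothetical stand-ins for the arch-provided vmalloc bounds. */
#define VMALLOC_START	0xffff800000000000UL
#define VMALLOC_END	0xfffffbffffffffffUL

/*
 * Suggested generic default: unless the architecture defines its own
 * dedicated ioremap range (as powerpc64 does), reuse the vmalloc range.
 */
#ifndef IOREMAP_START
#define IOREMAP_START	VMALLOC_START
#define IOREMAP_END	VMALLOC_END
#endif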
I think the immediate reason for it is that on some CPUs we have to use
4K pages in the HPT (hash page table) for IO mappings, while PAGE_SIZE ==
64K, and the hash MMU only supports a single base page size per segment
(256M or 1T). A rough sketch of the resulting constraint is below.
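
(Illustration only; the constants here are hypothetical, not the real
powerpc values. Because the base page size is a per-segment property,
64K-page vmalloc mappings and 4K-page IO mappings can never share a
segment, so the two ranges are kept segment-aligned and disjoint:)

#include <stdio.h>

/* Hypothetical constants; the real powerpc layout differs. */
#define SEGMENT_SHIFT	28			/* 256M segments (1T also exists) */
#define SEGMENT_MASK	(~((1UL << SEGMENT_SHIFT) - 1))

/* Two segment-aligned, disjoint regions, one per base page size. */
#define VMALLOC_REGION	0xd000000000000000UL	/* mapped with 64K pages */
#define IOREMAP_REGION	0xd000080000000000UL	/* mapped with 4K pages  */

/*
 * The hash MMU allows one base page size per segment, so two mappings
 * that need different page sizes must fail this check.
 */
static int same_segment(unsigned long a, unsigned long b)
{
	return (a & SEGMENT_MASK) == (b & SEGMENT_MASK);
}

int main(void)
{
	/* Prints 0: the two regions live in different segments. */
	printf("%d\n", same_segment(VMALLOC_REGION, IOREMAP_REGION));
	return 0;
}
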
cheers