Message-Id: <98C61C92-0D24-41C6-B9DA-8335B34D3B07@konsulko.com>
Date: Mon, 10 Sep 2018 13:05:41 -0700
From: Dan Malek <dan.malek@...sulko.com>
To: Christophe Leroy <christophe.leroy@....fr>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
aneesh.kumar@...ux.vnet.ibm.com,
Nicholas Piggin <npiggin@...il.com>,
Michael Ellerman <mpe@...erman.id.au>,
linuxppc-dev@...ts.ozlabs.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: How to handle PTE tables with non contiguous entries ?
Hello Christophe.
> On Sep 10, 2018, at 7:34 AM, Christophe Leroy <christophe.leroy@....fr> wrote:
>
> On the powerpc 8xx, handling 16k pages requires page tables with 4 identical entries.
Do you think a 16k page is useful? Back in the day, the goal was to keep the fault handling and management overhead as simple and generic as possible, since, as you know, this affects system performance. I understand there would be fewer page faults and more efficient use of the MMU resources with 16k, but if that comes at an overhead cost, is it really worth it?
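To make the 4-identical-entries point concrete, here is a minimal sketch (hypothetical helper name and types, not the actual 8xx code): the hardware walks 4k PTEs, so a 16k page is represented by four consecutive, identical entries covering the 16k range.

```c
#include <stdint.h>
#include <stddef.h>

#define PTES_PER_16K 4 /* 16k page / 4k hardware page size */

typedef uint32_t pte_t;

/* Hypothetical helper: replicate one entry into the four
 * consecutive 4k slots that together describe a 16k page.
 * pte_slot points at the first (16k-aligned) slot. */
static void set_16k_pte(pte_t *pte_slot, pte_t entry)
{
	for (size_t i = 0; i < PTES_PER_16K; i++)
		pte_slot[i] = entry;
}
```

Any update (set, clear, protection change) has to touch all four slots, which is exactly the bookkeeping cost being weighed here.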
In addition to the normal 4k mapping, I had thought about using 512k mapping, which could be easily detected at level 2 (PMD), with a single entry loaded into the MMU. We would need an aux header or something from the executable/library to assist with knowing when this could be done. I never got around to it. :)
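The 512k idea could look something like this sketch (the flag bit and layout here are entirely made up for illustration; the real 8xx PMD format would dictate the details): a spare bit in the level-2 entry marks it as a single 512k mapping rather than a pointer to a table of 4k PTEs.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical: assume one spare low bit in a PMD entry is free
 * to flag "this entry maps 512k directly", so the walker can load
 * a single large entry instead of descending to a PTE table. */
#define PMD_HUGE_512K 0x1u

typedef uint32_t pmd_t;

static bool pmd_is_512k(pmd_t pmd)
{
	return (pmd & PMD_HUGE_512K) != 0;
}
```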
The 8xx platforms tended to have smaller memory resources, so the 4k granularity was also useful in making better use of the available space.
> Would someone have an idea of an elegent way to handle that ?
My suggestion would be to not change the PTE table, but have the fault handler detect a 16k page and load any one of the four entries based upon the miss offset. Kinda use the same 4k miss handler, but with 16k knowledge. You wouldn’t save any PTE table space, but the MMU efficiency may be worth it. As I recall, the hardware may ignore/mask any LS bits, and there is PMD level information to utilize as well.
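Picking the entry from the miss offset is just bit selection on the faulting address; a sketch (hypothetical function, not actual handler code):

```c
#include <stdint.h>

#define PAGE_SHIFT_4K 12

/* Hypothetical sketch: within a 16k page, bits 13:12 of the
 * faulting address say which of the four 4k sub-pages missed,
 * i.e. which of the four identical PTE slots to load. */
static unsigned int pte_index_for_miss(uint32_t miss_addr)
{
	return (miss_addr >> PAGE_SHIFT_4K) & 0x3;
}
```

Since all four entries are identical anyway, a handler that masks those bits and always loads slot 0 would work just as well, which is the "hardware may ignore/mask any LS bits" observation above.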
It’s been a long time since I’ve investigated how things have evolved, glad it’s still in use, and I hope you at least have some fun with the development :)
Thanks.
— Dan