Message-ID: <a6eec165-9c48-56aa-8b05-6bf73411e4bf@gentwo.org>
Date: Wed, 12 Nov 2025 09:11:02 -0800 (PST)
From: "Christoph Lameter (Ampere)" <cl@...two.org>
To: Ryan Roberts <ryan.roberts@....com>
cc: Yang Shi <yang@...amperecomputing.com>, catalin.marinas@....com, 
    will@...nel.org, linux-arm-kernel@...ts.infradead.org, 
    linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] arm64: mm: show direct mapping use in /proc/meminfo

On Wed, 12 Nov 2025, Ryan Roberts wrote:

> I have a long-term aspiration to enable "per-process page size", where each user
> space process can use a different page size. The first step is to be able to
> emulate a page size to the process which is larger than the kernel's. For that
> reason, I really dislike introducing new ABI that exposes the geometry of the
> kernel page tables to user space. I'd really like to be clear on what use case
> benefits from this sort of information before we add it.

There are two use cases: one is user space, where you want to "emulate"
other page sizes, and the other is kernel space.

The per-process page size is likely going to end up being a per-VMA page
size, since these address spaces can be shared and the VMA already
contains information about huge pages, memory policies and other things
related to memory layout. And yes, it would be great to have an
accounting of the page sizes used in a VMA.


> nit: arm64 tends to use the term "linear map" not "direct map". I'm not sure why
> or what the history is. Given this is arch-specific should we be aligning on the
> architecture's terminology here? I don't know...

Other architectures already expose this data using the terminology used
here. The information is useful for seeing whether an issue with small
pages could be impacting kernel performance. Coming from other
architectures, it is surprising that this information is not readily
available.
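
[Editor's note, not part of the original message: on x86_64 this
accounting appears in /proc/meminfo as DirectMap4k / DirectMap2M /
DirectMap1G lines. A minimal sketch of reading those fields, assuming
that x86_64 field naming (other architectures, including the arm64
patch under discussion, may use different field names):]

```python
# Sketch: parse DirectMap* fields from /proc/meminfo-style text.
# The field names below are those reported on x86_64; they are an
# assumption here, not taken from the patch in this thread.
SAMPLE = """\
MemTotal:       32614344 kB
DirectMap4k:      312204 kB
DirectMap2M:     8075264 kB
DirectMap1G:    26214400 kB
"""

def direct_map_kb(text):
    """Return a {field_name: kilobytes} dict for DirectMap* entries."""
    out = {}
    for line in text.splitlines():
        if line.startswith("DirectMap"):
            name, rest = line.split(":", 1)
            out[name] = int(rest.split()[0])
    return out

print(direct_map_kb(SAMPLE))
```

[On a live system one would read the real file, e.g.
direct_map_kb(open("/proc/meminfo").read()); a large DirectMap4k value
relative to the others is the sort of small-page fragmentation signal
discussed above.]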

