Message-ID: <8f7b3f4e-bf56-4030-952f-962291e53ccc@arm.com>
Date: Thu, 18 Sep 2025 16:15:52 +0200
From: Kevin Brodsky <kevin.brodsky@....com>
To: Yang Shi <yang@...amperecomputing.com>, linux-hardening@...r.kernel.org,
Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>, Catalin Marinas
<catalin.marinas@....com>, Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, Ira Weiny <ira.weiny@...el.com>,
Jann Horn <jannh@...gle.com>, Jeff Xu <jeffxu@...omium.org>,
Joey Gouly <joey.gouly@....com>, Kees Cook <kees@...nel.org>,
Linus Walleij <linus.walleij@...aro.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Marc Zyngier <maz@...nel.org>,
Mark Brown <broonie@...nel.org>, Matthew Wilcox <willy@...radead.org>,
Maxwell Bland <mbland@...orola.com>, "Mike Rapoport (IBM)"
<rppt@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Pierre Langlois <pierre.langlois@....com>,
Quentin Perret <qperret@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>, linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org, x86@...nel.org
Subject: Re: [RFC PATCH v5 00/18] pkeys-based page table hardening
On 25/08/2025 09:31, Kevin Brodsky wrote:
>>> Note: the performance impact of set_memory_pkey() is likely to be
>>> relatively low on arm64 because the linear mapping uses PTE-level
>>> descriptors only. This means that set_memory_pkey() simply changes the
>>> attributes of some PTE descriptors. However, some systems may be able to
>>> use higher-level descriptors in the future [5], meaning that
>>> set_memory_pkey() may have to split mappings. Allocating page tables
>> I suppose the page table hardening feature will be opt-in due to
>> its overhead? If so I think you can just keep the kernel linear mapping
>> using PTEs, just like debug page alloc.
> Indeed, I don't expect it to be turned on by default (in defconfig). If
> the overhead proves too large when block mappings are used, it seems
> reasonable to force PTE mappings when kpkeys_hardened_pgtables is enabled.
I had a closer look at what happens when the linear map uses block
mappings, rebasing this series on top of [1]. Unfortunately, this is
worse than I thought: it does not work at all as things stand.
The main issue is that calling set_memory_pkey() in pagetable_*_ctor()
can cause the linear map to be split, which requires new PTP(s) to be
allocated, which means more nested call(s) to set_memory_pkey(). This
explodes as a non-recursive lock is taken on that path.
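To make this concrete, the call chain looks roughly like this (only
pagetable_pte_ctor() and set_memory_pkey() are real names, the
intermediate steps are a schematic sketch):

  pagetable_pte_ctor()
    set_memory_pkey()              /* give the new PTP its pkey */
      <split linear map block>     /* takes a non-recursive lock */
        <allocate a new PTP for the split>
          pagetable_pte_ctor()
            set_memory_pkey()      /* re-enters and deadlocks on that lock */
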
More fundamentally, this cannot work unless we can explicitly allocate
PTPs from either:
1. A pool of PTE-mapped pages
2. A pool of memory that is already mapped with the right pkey (at any
level)
This is where I have to apologise to Rick for not having studied his
series more thoroughly, as patch 17 [2] covers this issue very well in
the commit message.
It seems fair to say there is no ideal or simple solution, though.
Rick's patch reserves enough (PTE-mapped) memory for fully splitting the
linear map, which is relatively simple but not very pleasant. Chatting
with Ryan Roberts, we came up with another approach, improving on solution 1
mentioned in [2]. It would rely on allocating all PTPs from a special
pool (without using set_memory_pkey() in pagetable_*_ctor()), along these
lines:
1. 2 pages are reserved at all times (with the appropriate pkey)
2. Try to allocate a 2M block. If needed, use a reserved page as a PMD to
split a PUD. If the allocation succeeds, set the block's pkey - the entire
block can now be used for PTPs. Replenish the reserve from the block if
needed.
3. If no block is available, make an order-2 allocation (4 pages). If
needed, use 1-2 reserved pages to split the PUD/PMD. Set the pkey of the 4
pages, and take 1-2 of them to replenish the reserve if needed.
This ensures that we never run out of PTPs for splitting. We may get
into an OOM situation more easily due to the order-2 requirement, but
the risk remains low compared to requiring a 2M block. A bigger concern
is concurrency - do we need a per-CPU cache? Reserving a 2M block per
CPU could be very much overkill.
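
To make steps 2 and 3 a bit more concrete, here is a very rough sketch of
what the allocation path could look like. The ptp_pool_*() and split_*()
helpers are made up for illustration, PTP_PKEY is a placeholder for the
page-table pkey, the set_memory_pkey() signature is assumed to match the
other set_memory_*() helpers, and locking/reserve accounting are glossed
over:

static struct page *ptp_pool_alloc(gfp_t gfp)
{
	struct page *page;

	/* Fast path: a page carved out of a previous allocation. */
	page = ptp_pool_take_page();
	if (page)
		return page;

	/* Step 2: try to grab a whole 2M block. */
	page = alloc_pages(gfp, get_order(PMD_SIZE));
	if (page) {
		/* Splitting the covering PUD may consume one reserved page. */
		split_pud_using_reserve(page);
		set_memory_pkey((unsigned long)page_address(page),
				PMD_SIZE / PAGE_SIZE, PTP_PKEY);
		/* Top the reserve back up to 2; the rest goes to the pool. */
		ptp_pool_refill(page, PMD_SIZE / PAGE_SIZE);
		return ptp_pool_take_page();
	}

	/* Step 3: fall back to an order-2 allocation (4 pages). */
	page = alloc_pages(gfp, 2);
	if (!page)
		return NULL;
	/* Splitting PUD and/or PMD may consume up to two reserved pages. */
	split_pud_pmd_using_reserve(page);
	set_memory_pkey((unsigned long)page_address(page), 4, PTP_PKEY);
	ptp_pool_refill(page, 4);
	return ptp_pool_take_page();
}

The idea being that a single split never needs more than 2 reserved pages
(one new PMD table plus one new PTE table), so the reserve can always be
topped back up before the next allocation.
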
No matter which solution is used, this clearly increases the complexity
of kpkeys_hardened_pgtables. Mike Rapoport has posted a number of RFCs
[3][4] that aim at addressing this problem more generally, but no
consensus seems to have emerged and I'm not sure they would completely
solve this specific problem either.
For now, my plan is to stick to solution 3 from [2], i.e. force the
linear map to be PTE-mapped. This is easily done on arm64 as it is the
default, and is required for rodata=full, unless [1] is applied and the
system supports BBML2_NOABORT. See [1] for the potential performance
improvements we'd be missing out on (~5% ballpark). I'm not quite sure
what the picture looks like on x86 - it may well be more significant, as
Rick suggested.
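
Back to arm64: the decision already funnels through a single helper, so
forcing PTE mappings could boil down to something like the sketch below.
This is a simplified take on can_set_direct_map() in
arch/arm64/mm/pageattr.c (the real function also considers KFENCE), and
CONFIG_KPKEYS_HARDENED_PGTABLES is just an assumed option name:

bool can_set_direct_map(void)
{
	/*
	 * rodata=full and debug_pagealloc already force the linear map
	 * to be PTE-mapped; kpkeys_hardened_pgtables would simply add
	 * itself to the list so set_memory_pkey() never has to split.
	 */
	return rodata_full || debug_pagealloc_enabled() ||
	       IS_ENABLED(CONFIG_KPKEYS_HARDENED_PGTABLES);
}
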
- Kevin
[1] https://lore.kernel.org/all/20250829115250.2395585-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/all/20210830235927.6443-18-rick.p.edgecombe@intel.com/
[3] https://lore.kernel.org/lkml/20210823132513.15836-1-rppt@kernel.org/
[4] https://lore.kernel.org/all/20230308094106.227365-1-rppt@kernel.org/