Message-ID: <2537ad07-6e49-401b-9ffa-63a07740db4a@intel.com>
Date: Mon, 8 Sep 2025 12:17:58 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
 pbonzini@...hat.com, seanjc@...gle.com, dave.hansen@...ux.intel.com
Cc: rick.p.edgecombe@...el.com, isaku.yamahata@...el.com,
 kai.huang@...el.com, yan.y.zhao@...el.com, chao.gao@...el.com,
 tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, kvm@...r.kernel.org,
 x86@...nel.org, linux-coco@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 00/12] TDX: Enable Dynamic PAMT

On 6/9/25 12:13, Kirill A. Shutemov wrote:
> The exact size of the required PAMT memory is determined by the TDX
> module and may vary between TDX module versions, but currently it is
> approximately 0.4% of the system memory. This is a significant
> commitment, especially if it is not known upfront whether the machine
> will run any TDX guests.
> 
> The Dynamic PAMT feature reduces static PAMT allocations. PAMT_1G and
> PAMT_2M levels are still allocated on TDX module initialization, but the
> PAMT_4K level is allocated dynamically, reducing static allocations to
> approximately 0.004% of the system memory.
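
To put the quoted percentages in concrete terms, here is a back-of-the-envelope sketch of the static PAMT footprint with and without Dynamic PAMT. The 0.4% and 0.004% ratios come straight from the cover letter above; the 1 TiB host size and the helper names are illustrative assumptions, not anything from the patch series.

```python
# Rough PAMT footprint arithmetic using the ratios quoted in the cover
# letter: ~0.4% of system memory statically with classic PAMT, ~0.004%
# with Dynamic PAMT (only PAMT_1G/PAMT_2M preallocated). The exact
# ratios are TDX-module-version-specific; treat these as illustrative.

GIB = 1 << 30

def pamt_static_bytes(sysmem_bytes, ratio=0.004):
    """Classic PAMT: all three levels preallocated, ~0.4% of memory."""
    return int(sysmem_bytes * ratio)

def dpamt_static_bytes(sysmem_bytes, ratio=0.00004):
    """Dynamic PAMT: PAMT_4K allocated on demand, ~0.004% static."""
    return int(sysmem_bytes * ratio)

sysmem = 1024 * GIB  # a hypothetical 1 TiB host
print(pamt_static_bytes(sysmem) // (1 << 20), "MiB static PAMT")
print(dpamt_static_bytes(sysmem) // (1 << 20), "MiB static with DPAMT")
```

On that hypothetical 1 TiB host the static commitment drops from roughly 4 GiB to roughly 40 MiB, which is the "gagillion dollars" at fleet scale being discussed below.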

I'm beginning to think that this series is barking up the wrong tree.

>  18 files changed, 702 insertions(+), 108 deletions(-)

I totally agree that saving 0.4% of memory on a gagillion systems saves
a gagillion dollars.

But this series seems to be marching down the path that the savings
need to be down at page granularity: if a 2M page doesn't need PAMT,
then it strictly shouldn't have any PAMT. While that's certainly a
high-utility tack, I can't help but think it may be overcomplicated.

What if we just focused on three states:

1. System boots, has no DPAMT.
2. First TD starts up, all DPAMT gets allocated.
3. Last TD shuts down, all DPAMT gets freed.
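
The three states above boil down to a single TD refcount gating an all-or-nothing allocation. A minimal sketch of that idea, with hypothetical names (nothing here corresponds to actual symbols in the series; the real thing would live in kernel C with proper locking):

```python
# Hedged sketch of the proposed all-or-nothing scheme: the first TD to
# start allocates the entire PAMT_4K backing, the last TD to shut down
# frees it. The class and method names are invented for illustration.

class DpamtPool:
    def __init__(self):
        self.td_count = 0
        self.pamt_4k_allocated = False

    def td_start(self):
        # First TD: allocate the whole PAMT_4K level up front
        # (stand-in for a hypothetical alloc_all_pamt_4k()).
        if self.td_count == 0:
            self.pamt_4k_allocated = True
        self.td_count += 1

    def td_stop(self):
        assert self.td_count > 0
        self.td_count -= 1
        # Last TD gone: release everything
        # (stand-in for a hypothetical free_all_pamt_4k()).
        if self.td_count == 0:
            self.pamt_4k_allocated = False
```

The appeal is that the bookkeeping is one counter and one flag, versus per-2M-page tracking; the cost is that a single small TD pins the full ~0.4% allocation.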

The cases that leaves behind are when the system has a small number of
TDs packed into a relatively small number of 2M pages. That occurs
either because they're backed with real huge pages, or because they're
backed with 4k pages and nicely compacted since memory wasn't fragmented.

I know our uberscaler buddies are quite fond of those cases and want to
minimize memory use. But are you folks really going to have that many
systems which deploy a very small number of small TDs?

In other words, can we simplify this? Or at least _start_ simpler with v1?
