Message-ID: <aCd5wZ_Tp863I6pP@google.com>
Date: Fri, 16 May 2025 10:51:50 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Rick P Edgecombe <rick.p.edgecombe@...el.com>
Cc: Vishal Annapurve <vannapurve@...gle.com>, "pvorel@...e.cz" <pvorel@...e.cz>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "catalin.marinas@....com" <catalin.marinas@....com>,
Jun Miao <jun.miao@...el.com>, "palmer@...belt.com" <palmer@...belt.com>,
"pdurrant@...zon.co.uk" <pdurrant@...zon.co.uk>, "vbabka@...e.cz" <vbabka@...e.cz>,
"peterx@...hat.com" <peterx@...hat.com>, "x86@...nel.org" <x86@...nel.org>,
"amoorthy@...gle.com" <amoorthy@...gle.com>, "jack@...e.cz" <jack@...e.cz>, "maz@...nel.org" <maz@...nel.org>,
"tabba@...gle.com" <tabba@...gle.com>, "vkuznets@...hat.com" <vkuznets@...hat.com>,
"quic_svaddagi@...cinc.com" <quic_svaddagi@...cinc.com>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>, "hughd@...gle.com" <hughd@...gle.com>,
"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, Wei W Wang <wei.w.wang@...el.com>,
"keirf@...gle.com" <keirf@...gle.com>, Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>,
Yan Y Zhao <yan.y.zhao@...el.com>, Dave Hansen <dave.hansen@...el.com>,
"ajones@...tanamicro.com" <ajones@...tanamicro.com>, "rppt@...nel.org" <rppt@...nel.org>,
"quic_mnalajal@...cinc.com" <quic_mnalajal@...cinc.com>, "aik@....com" <aik@....com>,
"usama.arif@...edance.com" <usama.arif@...edance.com>, "fvdl@...gle.com" <fvdl@...gle.com>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
"quic_cvanscha@...cinc.com" <quic_cvanscha@...cinc.com>, "nsaenz@...zon.es" <nsaenz@...zon.es>,
"willy@...radead.org" <willy@...radead.org>, Fan Du <fan.du@...el.com>,
"anthony.yznaga@...cle.com" <anthony.yznaga@...cle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"thomas.lendacky@....com" <thomas.lendacky@....com>, "mic@...ikod.net" <mic@...ikod.net>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>, Kirill Shutemov <kirill.shutemov@...el.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "steven.price@....com" <steven.price@....com>,
"binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>, "muchun.song@...ux.dev" <muchun.song@...ux.dev>,
Zhiquan1 Li <zhiquan1.li@...el.com>, "rientjes@...gle.com" <rientjes@...gle.com>,
"mpe@...erman.id.au" <mpe@...erman.id.au>, Erdem Aktas <erdemaktas@...gle.com>,
"david@...hat.com" <david@...hat.com>, "jgg@...pe.ca" <jgg@...pe.ca>,
"bfoster@...hat.com" <bfoster@...hat.com>, "jhubbard@...dia.com" <jhubbard@...dia.com>,
Haibo1 Xu <haibo1.xu@...el.com>, "anup@...infault.org" <anup@...infault.org>,
Isaku Yamahata <isaku.yamahata@...el.com>, "jthoughton@...gle.com" <jthoughton@...gle.com>,
"will@...nel.org" <will@...nel.org>, "steven.sistare@...cle.com" <steven.sistare@...cle.com>,
"quic_pheragu@...cinc.com" <quic_pheragu@...cinc.com>, "jarkko@...nel.org" <jarkko@...nel.org>,
"chenhuacai@...nel.org" <chenhuacai@...nel.org>, Kai Huang <kai.huang@...el.com>,
"shuah@...nel.org" <shuah@...nel.org>, "dwmw@...zon.co.uk" <dwmw@...zon.co.uk>,
"pankaj.gupta@....com" <pankaj.gupta@....com>, Chao P Peng <chao.p.peng@...el.com>,
"nikunj@....com" <nikunj@....com>, Alexander Graf <graf@...zon.com>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>, "pbonzini@...hat.com" <pbonzini@...hat.com>,
"yuzenghui@...wei.com" <yuzenghui@...wei.com>, "jroedel@...e.de" <jroedel@...e.de>,
"suzuki.poulose@....com" <suzuki.poulose@....com>, "jgowans@...zon.com" <jgowans@...zon.com>,
Yilun Xu <yilun.xu@...el.com>, "liam.merwick@...cle.com" <liam.merwick@...cle.com>,
"michael.roth@....com" <michael.roth@....com>, "quic_tsoni@...cinc.com" <quic_tsoni@...cinc.com>,
"richard.weiyang@...il.com" <richard.weiyang@...il.com>, Ira Weiny <ira.weiny@...el.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>, Xiaoyao Li <xiaoyao.li@...el.com>,
"qperret@...gle.com" <qperret@...gle.com>,
"kent.overstreet@...ux.dev" <kent.overstreet@...ux.dev>, "dmatlack@...gle.com" <dmatlack@...gle.com>,
"james.morse@....com" <james.morse@....com>, "brauner@...nel.org" <brauner@...nel.org>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>, "pgonda@...gle.com" <pgonda@...gle.com>,
"quic_pderrin@...cinc.com" <quic_pderrin@...cinc.com>, "roypat@...zon.co.uk" <roypat@...zon.co.uk>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "hch@...radead.org" <hch@...radead.org>
Subject: Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd
On Fri, May 16, 2025, Rick P Edgecombe wrote:
> On Fri, 2025-05-16 at 06:11 -0700, Vishal Annapurve wrote:
> > Google internally uses 1G hugetlb pages to achieve high-bandwidth IO, a lower
> > memory footprint via HVO (HugeTLB Vmemmap Optimization), and a smaller
> > MMU/IOMMU page table memory footprint, among other improvements. These
> > percentage-level savings have a substantial impact at the scale of large
> > fleets of hosts, each carrying significant memory capacity.
>
> There must have been a lot of measuring involved in that. But the numbers I was
> hoping for were about how much *this* series helps upstream.
...
> I asked this question assuming there were some measurements for the 1GB part of
> this series. It sounds like the reasoning is instead that this is how Google
> does things, which is backed by way more benchmarking than kernel patches are
> used to getting. So it can just be reasonably assumed to be helpful.
>
> But for upstream code, I'd expect something a bit more concrete than "we
> believe" and "substantial impact". It seems like I'm in the minority here
> though. So if no one else wants to pressure test the thinking in the usual way,
> I guess I'll just have to wonder.
From my perspective, 1GiB hugepage support in guest_memfd isn't about improving
CoCo performance; it's about achieving feature parity on guest_memfd with respect
to existing backing stores so that it's possible to use guest_memfd to back all
VM shapes in a fleet.
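
For anyone without the context, guest_memfd is KVM's fd-based guest memory.  A
minimal sketch of how a VMM wires it up (assumes a kernel with the
KVM_CREATE_GUEST_MEMFD uAPI and a VM type that supports guest_memfd, e.g.
KVM_X86_SW_PROTECTED_VM in current upstream; error handling omitted, and the
1GiB size is purely illustrative):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	/* guest_memfd requires a VM type that supports it. */
	int vm = ioctl(kvm, KVM_CREATE_VM, KVM_X86_SW_PROTECTED_VM);

	/* Create a 1GiB guest_memfd to back guest memory. */
	struct kvm_create_guest_memfd gmem = { .size = 1UL << 30 };
	int gmem_fd = ioctl(vm, KVM_CREATE_GUEST_MEMFD, &gmem);

	/* Bind the guest_memfd to a memslot in place of a plain
	 * userspace (anonymous/HugeTLBFS) mapping. */
	struct kvm_userspace_memory_region2 region = {
		.slot            = 0,
		.flags           = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0,
		.memory_size     = 1UL << 30,
		.guest_memfd     = gmem_fd,
	};
	return ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region);
}

What this series adds is the ability for that guest_memfd to be backed by 1GiB
pages, the same way a HugeTLBFS-backed memslot can be today.
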
Let's assume there is significant value in backing non-CoCo VMs with 1GiB pages,
unless you want to re-litigate the existence of 1GiB support in HugeTLBFS.
If we assume 1GiB support is mandatory for non-CoCo VMs, then it becomes mandatory
for CoCo VMs as well, because it's the only realistic way to run CoCo VMs and
non-CoCo VMs on a single host. Mixing 1GiB HugeTLBFS with any other backing store
for VMs simply isn't tenable due to the nature of 1GiB allocations. E.g. grabbing
sub-1GiB chunks of memory for CoCo VMs quickly fragments memory to the point where
HugeTLBFS can't allocate memory for non-CoCo VMs.
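
To make "the nature of 1GiB allocations" concrete: a 1GiB hugetlb page needs a
free, physically contiguous, 1GiB-aligned region (or a boot-time reservation),
so a trivial probe like the one below (illustrative only, not from this series)
starts failing with ENOMEM on a fragmented host even when plenty of total
memory is free:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << MAP_HUGE_SHIFT)
#endif

int main(void)
{
	/* Reserving a 1GiB hugetlb page fails with ENOMEM at mmap()
	 * time if no contiguous, aligned 1GiB region can be found or
	 * reserved from the hugetlb pool. */
	void *p = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(1GiB hugetlb)");
		return 1;
	}
	munmap(p, 1UL << 30);
	return 0;
}
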
Teaching HugeTLBFS to play nice with TDX and SNP isn't happening, which leaves
adding 1GiB support to guest_memfd as the only way forward.
Any boost to TDX (or SNP) performance is purely a bonus.