Message-ID: <13096e4e39801806270c3a6a641102a8151aa5fc.camel@intel.com>
Date: Wed, 11 Jan 2023 10:00:07 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Hansen, Dave" <dave.hansen@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "Luck, Tony" <tony.luck@...el.com>,
"bagasdotme@...il.com" <bagasdotme@...il.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"Christopherson,, Sean" <seanjc@...gle.com>,
"Chatre, Reinette" <reinette.chatre@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"peterz@...radead.org" <peterz@...radead.org>,
"imammedo@...hat.com" <imammedo@...hat.com>,
"Gao, Chao" <chao.gao@...el.com>,
"Brown, Len" <len.brown@...el.com>,
"Shahar, Sagi" <sagis@...gle.com>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"Huang, Ying" <ying.huang@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>
Subject: Re: [PATCH v8 07/16] x86/virt/tdx: Use all system memory when
initializing TDX module as TDX memory
On Tue, 2023-01-10 at 08:18 -0800, Hansen, Dave wrote:
> On 1/10/23 04:09, Huang, Kai wrote:
> > On Mon, 2023-01-09 at 08:51 -0800, Dave Hansen wrote:
> > > On 1/9/23 03:48, Huang, Kai wrote:
> > > > > > > > This can also be enhanced in the future, i.e. by allowing adding non-TDX
> > > > > > > > memory to a separate NUMA node. In this case, the "TDX-capable" nodes
> > > > > > > > and the "non-TDX-capable" nodes can co-exist, but the kernel/userspace
> > > > > > > > needs to guarantee memory pages for TDX guests are always allocated from
> > > > > > > > the "TDX-capable" nodes.
> > > > > >
> > > > > > Why does it need to be enhanced? What's the problem?
> > > >
> > > > The problem is after TDX module initialization, no more memory can be hot-added
> > > > to the page allocator.
> > > >
> > > > Kirill suggested this may not be ideal. With the existing NUMA ABIs we can
> > > > actually have both TDX-capable and non-TDX-capable NUMA nodes online. We can
> > > > bind TDX workloads to TDX-capable nodes while other non-TDX workloads can
> > > > utilize all memory.
> > > >
> > > > But probably it is not necessary to call this out in the changelog?
> > >
> > > Let's say that we add this TDX-compatible-node ABI in the future. What
> > > will old code do that doesn't know about this ABI?
> >
> > Right. Old apps will break without knowing the new ABI. One resolution, I
> > think, is that we don't introduce a new userspace ABI, but hide "TDX-capable"
> > and "non-TDX-capable" nodes in the kernel, and let the kernel enforce always
> > allocating TDX guest memory from those "TDX-capable" nodes.
>
> That doesn't actually hide all of the behavior from users. Let's say
> they do:
>
> numactl --membind=6 qemu-kvm ...
>
> In other words, take all of this guest's memory and put it on node 6.
> There's lots of free memory on node 6, which is TDX-*IN*compatible. Then,
> they make it a TDX guest:
>
> numactl --membind=6 qemu-kvm -tdx ...
>
> What happens? Does the kernel silently ignore the --membind=6? Or does
> it return -ENOMEM somewhere and confuse the user, who has *LOTS* of free
> memory on node 6?
>
> In other words, I don't think the kernel can just enforce this
> internally and hide it from userspace.
IIUC the kernel, for instance KVM, which knows the 'task_struct' is a TDX
guest, can manually AND the "TDX-capable" node mask into the task's mempolicy,
so that memory will always be allocated from those "TDX-capable" nodes. KVM
can refuse to create the TDX guest if it finds the task's mempolicy doesn't
contain any "TDX-capable" node, and print a clear message to userspace.

But I am new to the core-mm code, so I might be misunderstanding something.
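
A rough sketch of the idea (illustrative only -- tdx_capable_nodes, the
function name and the hook point are all hypothetical, and I'm ignoring
cpuset/mempolicy locking):

#include <linux/nodemask.h>
#include <linux/sched.h>

/* Would be built during TDX module initialization. */
static nodemask_t tdx_capable_nodes;

/* Hypothetically called by KVM before creating a TDX guest for @tsk. */
static int tdx_restrict_guest_nodes(struct task_struct *tsk)
{
        nodemask_t allowed;

        /* Intersect the task's allowed nodes with the TDX-capable ones. */
        nodes_and(allowed, tsk->mems_allowed, tdx_capable_nodes);

        /* Refuse guest creation if no TDX-capable node remains. */
        if (nodes_empty(allowed)) {
                pr_err("TDX: task mempolicy has no TDX-capable node\n");
                return -EINVAL;
        }

        /* A real implementation would need proper cpuset locking here. */
        tsk->mems_allowed = allowed;
        return 0;
}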
>
> > > Is there something fundamental that keeps a memory area that spans two
> > > nodes from being removed and then a new area added that is comprised of
> > > a single node?
> > >
> > > Boot time:
> > >
> > > | memblock | memblock |
> > > <--Node=0--> <--Node=1-->
> > >
> > > Funky hotplug... nothing to see here, then:
> > >
> > > <--------Node=2-------->
> >
> > I must have missed something, but how can this happen?
> >
> > My memory is that this cannot happen, because the BIOS always allocates
> > address ranges for all NUMA nodes during machine boot. Those address ranges
> > don't necessarily need to be fully populated with DIMMs, but they don't
> > change during the machine's runtime.
>
> Is your memory correct? Is there evidence, or requirements in any
> specification to support your memory?
>
I tried to find whether any spec mentions this, but so far haven't found one.
I'll ask around to see whether this case can happen.

In the meantime, I also spent some time looking more deeply into the memory
hotplug code. Below is my thinking:
For a TDX system, AFAICT a non-buggy BIOS won't support physically
hot-removing CMR memory (and thus no hot-add of CMR memory either). So we are
either talking about hot-adding non-TDX-usable memory (memory that is not
configured to the TDX module), or the kernel soft-offlining -> (optionally
removing -> adding ->) onlining TDX-usable memory.
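
(As an aside, the former case could presumably be rejected with a memory
hotplug notifier, roughly like the sketch below. This is only an
illustration; is_tdx_memory() is a made-up helper that would walk the
tdx_memlist entries.)

#include <linux/memory.h>
#include <linux/notifier.h>

static int tdx_memory_notifier(struct notifier_block *nb,
                               unsigned long action, void *v)
{
        struct memory_notify *mn = v;

        if (action != MEM_GOING_ONLINE)
                return NOTIFY_OK;

        /*
         * Only allow onlining memory that was configured to the TDX
         * module, so that TDX-usable and non-TDX-usable memory never
         * get mixed in the page allocator.
         */
        return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
                NOTIFY_OK : NOTIFY_BAD;
}

static struct notifier_block tdx_memory_nb = {
        .notifier_call = tdx_memory_notifier,
};

tdx_memory_nb would then be registered with register_memory_notifier()
during TDX init.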
For the former, we don't need to care about whether the new range crosses
multiple tdx_memlist entries. For the latter, the offline granularity is a
'struct memory_block', which IIUC has a fixed size after boot.
And we can only offline a memory_block when it meets two conditions: 1) it
contains no memory hole, and 2) all its pages are in a single zone. IIUC this
means it's not possible to offline two adjacent contiguous tdx_memlist entries
and then online them together as a single one, as illustrated in the sketch
below.
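
To illustrate the two conditions (purely a sketch, not the actual
mm/memory_hotplug.c code):

#include <linux/mm.h>

/* Can [start_pfn, start_pfn + nr_pages) conceivably go offline? */
static bool can_offline_range(unsigned long start_pfn, unsigned long nr_pages)
{
        struct zone *zone = NULL;
        unsigned long pfn;

        for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
                /* 1) the range must not contain a memory hole ... */
                if (!pfn_valid(pfn))
                        return false;

                /* 2) ... and all pages must be in a single zone. */
                if (!zone)
                        zone = page_zone(pfn_to_page(pfn));
                else if (page_zone(pfn_to_page(pfn)) != zone)
                        return false;
        }

        return true;
}

So two ranges that were onlined separately would already have to satisfy both
conditions as one combined range before they could come back as a single
memory_block.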