Message-ID: <aBwSICZBrgiKX2UF@yzhao56-desk.sh.intel.com>
Date: Thu, 8 May 2025 10:08:32 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Dave Hansen <dave.hansen@...el.com>
CC: "Huang, Kai" <kai.huang@...el.com>, "kirill.shutemov@...ux.intel.com"
<kirill.shutemov@...ux.intel.com>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "seanjc@...gle.com" <seanjc@...gle.com>, "Edgecombe,
Rick P" <rick.p.edgecombe@...el.com>, "bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "mingo@...hat.com" <mingo@...hat.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC, PATCH 05/12] KVM: TDX: Add tdx_pamt_get()/put() helpers
On Wed, May 07, 2025 at 09:31:22AM -0700, Dave Hansen wrote:
> On 5/5/25 05:44, Huang, Kai wrote:
> >> +static int tdx_pamt_add(atomic_t *pamt_refcount, unsigned long hpa,
> >> + struct list_head *pamt_pages)
> >> +{
> >> + u64 err;
> >> +
> >> + hpa = ALIGN_DOWN(hpa, SZ_2M);
> >> +
> >> + spin_lock(&pamt_lock);
> > Just curious, Can the lock be per-2M-range?
>
> Folks, please keep it simple.
>
> If there's lock contention on this, we'll fix the lock contention, or
> hash the physical address into a fixed number of locks. But having it be
> per-2M-range sounds awful. Then you have to size it, and allocate it and
> then resize it if there's ever hotplug, etc...
In patch 2, there are per-2M-range pamt_refcounts. Could a per-2M-range
lock be implemented in a similar way?
+static atomic_t *pamt_refcounts;
+atomic_t *tdx_get_pamt_refcount(unsigned long hpa)
+{
+ return &pamt_refcounts[hpa / PMD_SIZE];
+}
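For illustration only, a per-2M-range lock array could be allocated and
indexed the same way. This is just a sketch of the idea; pamt_locks and
tdx_get_pamt_lock are made-up names, not code from the series:

static spinlock_t *pamt_locks;	/* one lock per 2M PAMT block */

static spinlock_t *tdx_get_pamt_lock(unsigned long hpa)
{
	return &pamt_locks[hpa / PMD_SIZE];
}

/*
 * tdx_pamt_add()/tdx_pamt_put() would then take
 * tdx_get_pamt_lock(hpa) instead of the global pamt_lock.
 */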
> Kirill, could you put together some kind of torture test for this,
> please? I would imagine a workload which is sitting in a loop setting up
> and tearing down VMs on a bunch of CPUs would do it.
>
> That ^ would be the worst possible case, I think. If you don't see lock
> contention there, you'll hopefully never see it on real systems.
When one vCPU is trying to install a guest page at HPA A while another vCPU
is trying to install a guest page at HPA B, they may theoretically contend for
the global pamt_lock even if HPA A and HPA B belong to different PAMT 2M blocks.
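Hashing the physical address into a fixed number of locks, as you suggested,
would also avoid that without any sizing or hotplug concerns. Roughly
(PAMT_LOCK_COUNT and tdx_pamt_lock below are illustrative names only):

#define PAMT_LOCK_COUNT	64

/* Each entry needs spin_lock_init() during TDX init. */
static spinlock_t pamt_lock_hash[PAMT_LOCK_COUNT];

static spinlock_t *tdx_pamt_lock(unsigned long hpa)
{
	/* Different 2M blocks usually hash to different locks. */
	return &pamt_lock_hash[hash_long(hpa / PMD_SIZE,
					 ilog2(PAMT_LOCK_COUNT))];
}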
> I *suspect* that real systems will get bottlenecked somewhere in the
> page conversion process rather than on this lock. But it should be a
> pretty simple experiment to run.