Message-ID: <aBrIkdnpmKujtVxf@yzhao56-desk.sh.intel.com>
Date: Wed, 7 May 2025 10:42:25 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Vishal Annapurve <vannapurve@...gle.com>
CC: "Huang, Kai" <kai.huang@...el.com>, "kirill.shutemov@...ux.intel.com"
<kirill.shutemov@...ux.intel.com>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "seanjc@...gle.com" <seanjc@...gle.com>, "Edgecombe,
Rick P" <rick.p.edgecombe@...el.com>, "bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "mingo@...hat.com" <mingo@...hat.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>
Subject: Re: [RFC, PATCH 05/12] KVM: TDX: Add tdx_pamt_get()/put() helpers
On Tue, May 06, 2025 at 06:15:40PM -0700, Vishal Annapurve wrote:
> On Tue, May 6, 2025 at 6:04 PM Yan Zhao <yan.y.zhao@...el.com> wrote:
> >
> > On Mon, May 05, 2025 at 08:44:26PM +0800, Huang, Kai wrote:
> > > On Fri, 2025-05-02 at 16:08 +0300, Kirill A. Shutemov wrote:
> > > > +static int tdx_pamt_add(atomic_t *pamt_refcount, unsigned long hpa,
> > > > +			struct list_head *pamt_pages)
> > > > +{
> > > > +	u64 err;
> > > > +
> > > > +	hpa = ALIGN_DOWN(hpa, SZ_2M);
> > > > +
> > > > +	spin_lock(&pamt_lock);
> > >
> > > Just curious, can the lock be per-2M-range?
> > Me too.
> > Could we introduce smaller locks, each covering a 2M range?
> >
> > And could we deposit 2 PAMT pages per 2M HPA range, regardless of
> > whether it's ultimately mapped as a huge page or not?
> >
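
To be concrete about the per-2M-range lock question above: below is an
untested sketch (PAMT_LOCK_BITS, pamt_locks[] and pamt_lock_for() are
made-up names, not anything in this series) that hashes the 2M-aligned
HPA into a small array of spinlocks, so that unrelated 2M ranges no
longer serialize on the single global pamt_lock:

#include <linux/hash.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

#define PAMT_LOCK_BITS	8
/* Each lock must still get a one-time spin_lock_init() at module init. */
static spinlock_t pamt_locks[1 << PAMT_LOCK_BITS];

/* Pick the lock covering the 2M range that @hpa falls in. */
static spinlock_t *pamt_lock_for(unsigned long hpa)
{
	return &pamt_locks[hash_long(hpa >> PMD_SHIFT, PAMT_LOCK_BITS)];
}

tdx_pamt_add() would then take spin_lock(pamt_lock_for(hpa)) instead of
spin_lock(&pamt_lock). Two distinct 2M ranges can still hash to the same
lock, but that only costs some contention, not correctness.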
>
> Are you suggesting keeping 2 PAMT pages allocated for each private 2M
> page even when it's mapped as a hugepage? That would waste 4 MB of
> memory per 1 GB of guest memory (512 2M ranges per GB x 2 PAMT pages x
> 4 KB each), which adds up quickly for large VMs.
Ok. I'm thinking about the possibility of aligning the time of PAMT page
allocation with that of physical page allocation.
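
For example (an untested sketch: gmem_alloc_folio() is a stand-in for
wherever guest_memfd actually allocates the backing folio, and I'm
assuming a tdx_pamt_get() keyed by HPA, which may not match the final
signature in this series), the PAMT reference would be taken when the
physical page is allocated and dropped when it's freed, not at
map/unmap time:

static struct folio *gmem_alloc_folio(struct address_space *mapping,
				      pgoff_t index)
{
	struct folio *folio = filemap_grab_folio(mapping, index);

	if (IS_ERR(folio))
		return folio;

	/*
	 * Allocate/refcount the PAMT pages for the 2M range covering
	 * this folio now; the matching tdx_pamt_put() would run when
	 * the folio is freed, so PAMT lifetime simply follows the
	 * physical page's lifetime.
	 */
	if (tdx_pamt_get(folio_pfn(folio) << PAGE_SHIFT)) {
		folio_put(folio);
		return ERR_PTR(-ENOMEM);
	}

	return folio;
}

That way a 2M range's PAMT pages exist exactly as long as some physical
page in that range does, regardless of whether KVM later maps the range
as 4K or 2M.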