Message-ID: <a7f8c807df3bbe1923f21e30817b23e785776260.camel@intel.com>
Date: Wed, 11 Jan 2023 10:57:57 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Hansen, Dave" <dave.hansen@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "Luck, Tony" <tony.luck@...el.com>,
"bagasdotme@...il.com" <bagasdotme@...il.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"Christopherson,, Sean" <seanjc@...gle.com>,
"Chatre, Reinette" <reinette.chatre@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"peterz@...radead.org" <peterz@...radead.org>,
"imammedo@...hat.com" <imammedo@...hat.com>,
"Gao, Chao" <chao.gao@...el.com>,
"Brown, Len" <len.brown@...el.com>,
"Shahar, Sagi" <sagis@...gle.com>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"Huang, Ying" <ying.huang@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>
Subject: Re: [PATCH v8 11/16] x86/virt/tdx: Designate reserved areas for all
TDMRs
On Tue, 2023-01-10 at 07:19 -0800, Dave Hansen wrote:
> On 1/10/23 03:01, Huang, Kai wrote:
> > On Mon, 2023-01-09 at 17:22 -0800, Dave Hansen wrote:
> > > On 1/9/23 17:19, Huang, Kai wrote:
> > > > > It's probably also worth noting *somewhere* that there's a balance to be
> > > > > had between TDMRs and reserved areas. A system that is running out of
> > > > > reserved areas in a TDMR could split a TDMR to get more reserved areas.
> > > > > A system that has run out of TDMRs could relatively easily coalesce two
> > > > > adjacent TDMRs (before the PAMTs are allocated) and use a reserved area
> > > > > if there was a gap between them.
> > > > We can add the above to the changelog of this patch, or to patch 09
> > > > ("x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions").  The
> > > > latter is perhaps better since that patch is the first place where the
> > > > balance between TDMRs and reserved areas becomes relevant.
> > > > 
> > > > What is your suggestion?
> > > Just put it close to the code that actually hits the problem so the
> > > potential solution is close at hand to whoever hits the problem.
> > >
> > Sorry to double check: the code which hits the problem is the 'if (idx >=
> > max_reserved_per_tdmr)' check in tdmr_add_rsvd_area(), so I think I can add
> > it right before this check?
>
> Please just hack together how you think it should look and either reply
> with an updated patch, or paste the relevant code snippet in your reply.
> That'll keep me from having to go chase this code back down.
>
Thanks for the tip. How about below?
static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, u64 addr,
                              u64 size, u16 max_reserved_per_tdmr)
{
        struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
        int idx = *p_idx;

        /* Reserved area must be 4K aligned in offset and size */
        if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
                return -EINVAL;

        /*
         * The TDX module supports only a limited number of TDMRs and a
         * limited number of reserved areas per TDMR.  There's a
         * balance to be had between TDMRs and reserved areas.  A
         * system that is running out of reserved areas in a TDMR could
         * split a TDMR to get more reserved areas.  A system that has
         * run out of TDMRs could relatively easily coalesce two
         * adjacent TDMRs (before the PAMTs are allocated) and use a
         * reserved area if there was a gap between them.
         */
        if (idx >= max_reserved_per_tdmr) {
                pr_warn("too many reserved areas for TDMR [0x%llx, 0x%llx)\n",
                        tdmr->base, tdmr_end(tdmr));
                return -ENOSPC;
        }

        /*
         * Consume one reserved area per call.  Make no effort to
         * optimize or reduce the number of reserved areas consumed;
         * for instance, contiguous reserved areas are not merged.
         */
        rsvd_areas[idx].offset = addr - tdmr->base;
        rsvd_areas[idx].size = size;

        *p_idx = idx + 1;

        return 0;
}
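
And for reference, here is a standalone userspace sketch of the calling
pattern (not the kernel code -- the struct layout, the MAX_RSVD_PER_TDMR
value and the [1G, 3G) memory layout below are made up purely for
illustration, and tdmr_end() is assumed to be simply base + size): the
hole between two convertible memory regions inside one TDMR is consumed
as a single reserved area, and the alignment and -ENOSPC checks behave
as in the function above.

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define PAGE_SIZE               4096ULL
#define PAGE_MASK               (~(PAGE_SIZE - 1))
#define MAX_RSVD_PER_TDMR       16      /* made-up limit for this sketch */

struct tdmr_reserved_area {
        uint64_t offset;                /* relative to the TDMR base */
        uint64_t size;
};

struct tdmr_info {
        uint64_t base;
        uint64_t size;
        struct tdmr_reserved_area reserved_areas[MAX_RSVD_PER_TDMR];
};

/* Assumed to match the kernel helper: end of the TDMR range */
static uint64_t tdmr_end(struct tdmr_info *tdmr)
{
        return tdmr->base + tdmr->size;
}

/* Cut-down userspace twin of the kernel function above */
static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx,
                              uint64_t addr, uint64_t size,
                              int max_reserved_per_tdmr)
{
        int idx = *p_idx;

        /* Reserved area must be 4K aligned in offset and size */
        if ((addr & ~PAGE_MASK) || (size & ~PAGE_MASK))
                return -EINVAL;

        if (idx >= max_reserved_per_tdmr)
                return -ENOSPC;

        tdmr->reserved_areas[idx].offset = addr - tdmr->base;
        tdmr->reserved_areas[idx].size = size;
        *p_idx = idx + 1;

        return 0;
}

int main(void)
{
        /* One TDMR covering [1G, 3G), with a 4M hole ending at 2G */
        struct tdmr_info tdmr = { .base = 1ULL << 30, .size = 2ULL << 30 };
        uint64_t hole_start = (2ULL << 30) - (4ULL << 20);
        uint64_t hole_size = 4ULL << 20;
        int rsvd_idx = 0, ret;

        ret = tdmr_add_rsvd_area(&tdmr, &rsvd_idx, hole_start, hole_size,
                                 MAX_RSVD_PER_TDMR);

        /*
         * Prints: ret 0: TDMR [0x40000000, 0xc0000000)
         *         rsvd[0] offset 0x3fc00000 size 0x400000
         */
        printf("ret %d: TDMR [0x%llx, 0x%llx) rsvd[0] offset 0x%llx size 0x%llx\n",
               ret, (unsigned long long)tdmr.base,
               (unsigned long long)tdmr_end(&tdmr),
               (unsigned long long)tdmr.reserved_areas[0].offset,
               (unsigned long long)tdmr.reserved_areas[0].size);

        return 0;
}

This also illustrates your earlier point: if two adjacent TDMRs were
coalesced (before the PAMTs are allocated), the gap between them would
be consumed the same way, as one reserved area of the combined TDMR.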