Message-ID: <SA1PR21MB133596B911C6A45142B83B52BF0C9@SA1PR21MB1335.namprd21.prod.outlook.com>
Date: Wed, 23 Nov 2022 03:27:25 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
CC: "ak@...ux.intel.com" <ak@...ux.intel.com>,
"arnd@...db.de" <arnd@...db.de>, "bp@...en8.de" <bp@...en8.de>,
"brijesh.singh@....com" <brijesh.singh@....com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"hpa@...or.com" <hpa@...or.com>,
"jane.chu@...cle.com" <jane.chu@...cle.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
KY Srinivasan <kys@...rosoft.com>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"luto@...nel.org" <luto@...nel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"seanjc@...gle.com" <seanjc@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 2/6] x86/tdx: Retry TDVMCALL_MAP_GPA() when needed
> From: Kirill A. Shutemov <kirill@...temov.name>
> Sent: Monday, November 21, 2022 4:01 PM
> [...]
> On Mon, Nov 21, 2022 at 11:51:47AM -0800, Dexuan Cui wrote:
> [...]
> I'm not convinced it deserves a separate helper for one user.
> Does it look that ugly if tdx_map_gpa() uses __tdx_hypercall() directly?
Will use __tdx_hypercall() directly.
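For context, calling __tdx_hypercall() directly means filling a
struct tdx_hypercall_args and passing TDX_HCALL_HAS_OUTPUT, so the
output registers (R11 in particular) come back in the same struct.
A minimal fragment sketch, assuming the struct tdx_hypercall_args
layout from asm/shared/tdx.h:

        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
                .r11 = TDVMCALL_MAP_GPA,
                .r12 = start,           /* GPA range to convert */
                .r13 = end - start,
        };
        u64 ret;

        /* With TDX_HCALL_HAS_OUTPUT, output registers come back in 'args' */
        ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);

        /* On TDVMCALL_STATUS_RETRY, args.r11 holds the GPA to resume from */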
> > /* Called from __tdx_hypercall() for unrecoverable failure */
> > void __tdx_hypercall_failed(void)
> > {
> > @@ -691,6 +712,43 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len,
> > return true;
> > }
> >
> > +/*
> > + * Notify the VMM about page mapping conversion. More info about ABI
> > + * can be found in TDX Guest-Host-Communication Interface (GHCI),
> > + * section "TDG.VP.VMCALL<MapGPA>"
> > + */
> > +static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
> > +{
> > + u64 ret, r11;
> > +
> > + while (1) {
>
> Endless? Maybe an upper limit if no progress?
I'll add a max count of 1000, which should be far more than enough.
> > + ret = _tdx_hypercall_output_r11(TDVMCALL_MAP_GPA, start,
> > + end - start, 0, 0, &r11);
> > + if (!ret)
> > + break;
> > +
> > + if (ret != TDVMCALL_STATUS_RETRY)
> > + break;
> > +
> > + /*
> > + * The guest must retry the operation for the pages in the
> > + * region starting at the GPA specified in R11. Make sure R11
> > + * contains a sane value.
> > + */
> > + if ((r11 & ~cc_mkdec(0)) < (start & ~cc_mkdec(0)) ||
> > + (r11 & ~cc_mkdec(0)) >= (end & ~cc_mkdec(0)))
> > + return false;
>
> Emm. All of them suppose to have shared bit set, why not compare directly
> without cc_mkdec() dance?
You're right: the cc_mkdec() masking above is unnecessary and will be
removed. So, I'll use the below in v2:
/*
 * Notify the VMM about page mapping conversion. More info about ABI
 * can be found in TDX Guest-Host-Communication Interface (GHCI),
 * section "TDG.VP.VMCALL<MapGPA>".
 */
static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
{
        int max_retry_cnt = 1000, retry_cnt = 0;
        struct tdx_hypercall_args args;
        u64 map_fail_paddr, ret;

        while (1) {
                args.r10 = TDX_HYPERCALL_STANDARD;
                args.r11 = TDVMCALL_MAP_GPA;
                args.r12 = start;
                args.r13 = end - start;
                args.r14 = 0;
                args.r15 = 0;

                ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
                if (!ret)
                        break;

                if (ret != TDVMCALL_STATUS_RETRY)
                        break;

                /*
                 * The guest must retry the operation for the pages in the
                 * region starting at the GPA specified in R11. Make sure R11
                 * contains a sane value.
                 */
                map_fail_paddr = args.r11;
                if (map_fail_paddr < start || map_fail_paddr >= end)
                        return false;

                if (map_fail_paddr == start) {
                        retry_cnt++;
                        if (retry_cnt > max_retry_cnt)
                                return false;
                } else {
                        retry_cnt = 0;
                        start = map_fail_paddr;
                }
        }

        return !ret;
}
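
For completeness, here is a rough sketch of how I expect the caller to
end up looking, based on the existing tdx_enc_status_changed() and
try_accept_one(); the exact shape is my assumption, not the final
patch (the real acceptance loop also tries 1G/2M pages before 4K):

        static bool tdx_enc_status_changed(unsigned long vaddr, int numpages,
                                           bool enc)
        {
                phys_addr_t start = __pa(vaddr);
                phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);

                if (!enc) {
                        /* Set the shared (decrypted) bits: */
                        start |= cc_mkdec(0);
                        end |= cc_mkdec(0);
                }

                /* Notify the VMM; gives up after the bounded retries above */
                if (!tdx_map_gpa(start, end, enc))
                        return false;

                /* shared->private conversion also requires memory acceptance */
                if (enc) {
                        while (start < end) {
                                /* Simplified: 4K only; the real code tries
                                 * larger page sizes first */
                                if (!try_accept_one(&start, end - start,
                                                    PG_LEVEL_4K))
                                        return false;
                        }
                }

                return true;
        }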