Message-ID: <SN6PR12MB2767C4C296281D25306885A78E7D9@SN6PR12MB2767.namprd12.prod.outlook.com>
Date: Sat, 3 Sep 2022 17:30:28 +0000
From: "Kalra, Ashish" <Ashish.Kalra@....com>
To: Boris Petkov <bp@...en8.de>
CC: "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"slp@...hat.com" <slp@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"Lendacky, Thomas" <Thomas.Lendacky@....com>,
"alpergun@...gle.com" <alpergun@...gle.com>,
"jroedel@...e.de" <jroedel@...e.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"seanjc@...gle.com" <seanjc@...gle.com>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"srinivas.pandruvada@...ux.intel.com"
<srinivas.pandruvada@...ux.intel.com>,
"ardb@...nel.org" <ardb@...nel.org>,
"dovmurik@...ux.ibm.com" <dovmurik@...ux.ibm.com>,
"tobin@....com" <tobin@....com>,
"Roth, Michael" <Michael.Roth@....com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"kirill@...temov.name" <kirill@...temov.name>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"sathyanarayanan.kuppuswamy@...ux.intel.com"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"luto@...nel.org" <luto@...nel.org>,
"dgilbert@...hat.com" <dgilbert@...hat.com>,
"marcorr@...gle.com" <marcorr@...gle.com>,
"jarkko@...nel.org" <jarkko@...nel.org>
Subject: RE: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP fault for user address
Hello Boris,
>>So essentially we want to map the faulting address to a RMP entry,
>>considering the fact that a 2M host hugepage can be mapped as 4K RMP table entries and 1G host hugepage can be mapped as 2M RMP table entries.
>So something's seriously confusing or missing here because if you fault on a 2M host page and the underlying RMP entries are 4K then you can use pte_index().
>If the host page is 1G and the underlying RMP entries are 2M, pmd_index() should work here too.
>But this piecemeal back'n'forth doesn't seem to resolve this so I'd like to ask you pls to sit down, take your time and give a detailed example of the two possible cases and what the difference is between pte_/pmd_index and your way. Feel free to add actual debug output and paste it here.
There is one 64-bit RMP entry for every physical 4K page of DRAM, so essentially every 4K page of DRAM is represented by an RMP entry.
So even if the host page is 1G and the underlying (smashed/split) RMP entries are 2M, the RMP table still has to be indexed down to the 4K entry corresponding to the faulting address.
If it were simply a 2M entry in the RMP table, then pmd_index() would work correctly.
Consider the following example:
address = 0x40200000;
level = PG_LEVEL_1G;
pfn = 0x40000;
pfn |= pmd_index(address);
This gives an RMP table index of 0x40001, which would be correct if the RMP table entries were simply 2MB-granular; but we need to map further down to the corresponding 4K entry.
With the same example as above:
level = PG_LEVEL_1G;
mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
pfn |= (address >> PAGE_SHIFT) & mask;
This gives an RMP table index of 0x40200 (mask = 0x40000 - 0x200 = 0x3fe00, and (0x40200000 >> 12) & 0x3fe00 = 0x200, so pfn | 0x200 = 0x40200),
which is the correct RMP table entry for a smashed/split 1G page, indexed down to the 4K entry corresponding to the faulting address.
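To make the arithmetic above easy to verify, here is a minimal, self-contained userspace sketch (illustrative stand-in code, not the actual kernel implementation) that reproduces both computations; PAGE_SHIFT, PMD_SHIFT, PTRS_PER_PMD, the PG_LEVEL_* values and pages_per_hpage() are simplified local versions of the kernel definitions:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PMD_SHIFT       21
#define PTRS_PER_PMD    512

/* Mirrors the kernel's enum pg_level numbering. */
enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

/* Number of 4K pages covered by a hugepage of the given level. */
static unsigned long pages_per_hpage(int level)
{
        return 1UL << ((level - 1) * 9);
}

/* Index of the 2M slot within its page table, as pmd_index() computes. */
static unsigned long pmd_index(unsigned long address)
{
        return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

int main(void)
{
        unsigned long address = 0x40200000UL;   /* faulting address */
        unsigned long pfn = 0x40000UL;          /* base pfn of the 1G host page */
        unsigned long mask;

        /* pmd_index() only yields the 2M slot number: 0x40000 | 0x1 = 0x40001 */
        printf("pmd_index based: %#lx\n", pfn | pmd_index(address));

        /*
         * Index at 4K granularity instead:
         * mask = 0x40000 - 0x200 = 0x3fe00, and
         * (0x40200000 >> 12) & 0x3fe00 = 0x200, so the result is 0x40200.
         */
        mask = pages_per_hpage(PG_LEVEL_1G) - pages_per_hpage(PG_LEVEL_1G - 1);
        printf("4K granularity:  %#lx\n",
               pfn | ((address >> PAGE_SHIFT) & mask));

        return 0;
}

Running this prints 0x40001 for the pmd_index() variant and 0x40200 for the 4K-granular computation, matching the two results above.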
Hopefully this clarifies why pmd_index() can't be used here.
Thanks,
Ashish