Message-ID: <YxmgdMMiTe8HoWqs@kernel.org>
Date: Thu, 8 Sep 2022 10:57:40 +0300
From: Jarkko Sakkinen <jarkko@...nel.org>
To: "Kalra, Ashish" <Ashish.Kalra@....com>
Cc: Marc Orr <marcorr@...gle.com>, Borislav Petkov <bp@...en8.de>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
"Lendacky, Thomas" <Thomas.Lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
"Roth, Michael" <Michael.Roth@....com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andi Kleen <ak@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>,
Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Alper Gun <alpergun@...gle.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>
Subject: Re: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP
fault for user address

On Thu, Sep 08, 2022 at 10:46:51AM +0300, Jarkko Sakkinen wrote:
> On Tue, Sep 06, 2022 at 06:44:23PM +0300, Jarkko Sakkinen wrote:
> > On Tue, Sep 06, 2022 at 02:17:15PM +0000, Kalra, Ashish wrote:
> > > [AMD Official Use Only - General]
> > >
> > > >> On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
> > > >> > On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
> > > >> > > +	pfn = pte_pfn(*pte);
> > > >> > > +
> > > >> > > +	/* If it's a large page then calculate the fault pfn */
> > > >> > > +	if (level > PG_LEVEL_4K) {
> > > >> > > +		unsigned long mask;
> > > >> > > +
> > > >> > > +		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
> > > >> > > +		pfn |= (address >> PAGE_SHIFT) & mask;
> > > >> >
> > > >> > Oh boy, this is unnecessarily complicated. Isn't this
> > > >> >
> > > >> > pfn |= pud_index(address);
> > > >> >
> > > >> > or
> > > >> > pfn |= pmd_index(address);
> > > >>
> > > >> I played with this a bit and ended up with
> > > >>
> > > >> pfn = pte_pfn(*pte) |
> > > >>       PFN_DOWN(address & page_level_mask(level - 1));
> > > >>
> > > >> Unless I got something terribly wrong, this should do the same as
> > > >> the existing calculations (see the attached patch).
> > >
> > > >Actually, I don't think they're the same. I think Jarkko's version is correct. Specifically:
> > > >- For level = PG_LEVEL_2M they're the same.
> > > >- For level = PG_LEVEL_1G:
> > > >The current code calculates a garbage mask:
> > > >mask = pages_per_hpage(level) - pages_per_hpage(level - 1); translates to:
> > > >>> hex(262144 - 512)
> > > >'0x3fe00'
> > >
> > > No, actually this is not a garbage mask; as I explained in earlier responses, we need to capture the address bits
> > > to get to the correct 4K index into the RMP table.
> > > Therefore, for level = PG_LEVEL_1G:
> > > mask = pages_per_hpage(level) - pages_per_hpage(level - 1) => 0x3fe00 (which is the correct mask).
> > >
> > > >But I believe Jarkko's version calculates the correct mask (below), incorporating all 18 offset bits into the 1G page.
> > > >>> hex(262144 -1)
> > > >'0x3ffff'
> > >
> > > We can get this simply by doing (pages_per_hpage(level) - 1), but as I mentioned above this is not what we need.
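
(For concreteness, a standalone userspace sketch of the difference
between the two masks; the constants and the sample address below are
stand-ins of mine for the pages_per_hpage() values, not the kernel
helper itself:)

#include <stdio.h>

int main(void)
{
	unsigned long per_2m = 0x200000UL / 0x1000UL;	/* 4K pages per 2M: 512 */
	unsigned long per_1g = 0x40000000UL / 0x1000UL;	/* 4K pages per 1G: 262144 */
	/* 4K index of an arbitrary fault address inside a 1G mapping: */
	unsigned long idx = 0x7f0012345678UL >> 12;

	/* The patch's mask: keeps only the 2M-granular part of the index. */
	printf("0x%lx\n", idx & (per_1g - per_2m));	/* 0x12200 */
	/* The full 4K-granular offset into the 1G page. */
	printf("0x%lx\n", idx & (per_1g - 1));		/* 0x12345 */
}
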
> >
> > I think you're correct, so I'll retry:
> >
> > (address / PAGE_SIZE) & (pages_per_hpage(level) - pages_per_hpage(level - 1)) =
> >
> > (address / PAGE_SIZE) & ((page_level_size(level) / PAGE_SIZE) - (page_level_size(level - 1) / PAGE_SIZE)) =
> >
> > [ factor out 1 / PAGE_SIZE ]
> >
> > (address & (page_level_size(level) - page_level_size(level - 1))) / PAGE_SIZE =
> >
> > [ Substitute with PFN_DOWN() ]
> >
> > PFN_DOWN(address & (page_level_size(level) - page_level_size(level - 1)))
> >
> > So you can just:
> >
> > pfn = pte_pfn(*pte) | PFN_DOWN(address & (page_level_size(level) - page_level_size(level - 1)));
> >
> > Which is IMHO way better than what it is now: no branching, and no
> > ad-hoc helpers (the current pages_per_hpage() is essentially just a
> > page_level_size() wrapper).
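
(A quick standalone check that the PFN_DOWN() form reproduces the
original masks for both levels; the hard-coded x86-64 page sizes below
stand in for page_level_size():)

#include <stdio.h>

int main(void)
{
	unsigned long sz_4k = 1UL << 12, sz_2m = 1UL << 21, sz_1g = 1UL << 30;

	/* PFN_DOWN(size(level) - size(level - 1)) for 2M and 1G levels: */
	printf("0x%lx\n", (sz_2m - sz_4k) >> 12);	/* 0x1ff   = 512 - 1 */
	printf("0x%lx\n", (sz_1g - sz_2m) >> 12);	/* 0x3fe00 = 262144 - 512 */
}
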
>
> I created a small test program:
>
> $ cat test.c
> #include <stdio.h>
>
> int main(void)
> {
> 	unsigned long arr[] = {0x8, 0x1000, 0x200000, 0x40000000, 0x8000000000};
> 	int i;
>
> 	for (i = 1; i < sizeof(arr)/sizeof(unsigned long); i++) {
> 		/* Use the 'l' modifier with %b, or values above 32 bits
> 		 * get truncated to unsigned int (%b needs glibc >= 2.35): */
> 		printf("%048lb\n", arr[i] - arr[i - 1]);
> 		printf("%048lb\n", (arr[i] - 1) ^ (arr[i - 1] - 1));
> 	}
> }
>
> $ gcc -o test test.c
>
> $ ./test
> 000000000000000000000000000000000000111111111000
> 000000000000000000000000000000000000111111111000
> 000000000000000000000000000111111111000000000000
> 000000000000000000000000000111111111000000000000
> 000000000000000000111111111000000000000000000000
> 000000000000000000111111111000000000000000000000
> 000000000111111111000000000000000000000000000000
> 000000000111111111000000000000000000000000000000
>
> So the operation could be described as:
>
> pfn = PFN_DOWN(address & (~page_level_mask(level) ^ ~page_level_mask(level - 1)));
>
> Which IMHO already documents itself quite well: index with the
> granularity of the given page level by removing the bits used by
> the levels below it.
I mean:

pfn = pte_pfn(*pte) | PFN_DOWN(address & (~page_level_mask(level) ^ ~page_level_mask(level - 1)));

Note that the PG_LEVEL_4K check is unnecessary, as the result
will be zero after PFN_DOWN().
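
(To see why, a standalone sketch reusing the 0x8 stand-in size from
the test program above for the level below PG_LEVEL_4K; these numbers
are stand-ins, not the kernel helpers:)

#include <stdio.h>

int main(void)
{
	/* (~page_level_mask(PG_LEVEL_4K) ^ ~page_level_mask(level below))
	 * with 0x1000 and 0x8 as stand-in sizes: every surviving bit sits
	 * below PAGE_SHIFT, so PFN_DOWN() of any masked address is zero. */
	unsigned long mask = (0x1000UL - 1) ^ (0x8UL - 1);	/* 0xff8 */

	printf("0x%lx\n", mask >> 12);	/* prints 0x0 */
}
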
BR, Jarkko