Message-ID: <CAA03e5FgiLoixmqpKtfNOXM_0P5Y7LQzr3_oQe+2Z=GJ6kw32g@mail.gmail.com>
Date: Tue, 6 Sep 2022 06:54:30 -0700
From: Marc Orr <marcorr@...gle.com>
To: Jarkko Sakkinen <jarkko@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>,
Ashish Kalra <Ashish.Kalra@....com>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>, linux-coco@...ts.linux.dev,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
"Lendacky, Thomas" <thomas.lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
"Roth, Michael" <michael.roth@....com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andi Kleen <ak@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>,
Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
Alper Gun <alpergun@...gle.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>
Subject: Re: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP
fault for user address
On Tue, Sep 6, 2022 at 3:25 AM Jarkko Sakkinen <jarkko@...nel.org> wrote:
>
> On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
> > On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
> > > + pfn = pte_pfn(*pte);
> > > +
> > > + /* If it's a large page then calculate the fault pfn */
> > > + if (level > PG_LEVEL_4K) {
> > > + unsigned long mask;
> > > +
> > > + mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
> > > + pfn |= (address >> PAGE_SHIFT) & mask;
> >
> > Oh boy, this is unnecessarily complicated. Isn't this
> >
> > pfn |= pud_index(address);
> >
> > or
> > pfn |= pmd_index(address);
>
> I played with this a bit and ended up with
>
> pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));
>
> Unless I got something terribly wrong, this should do the
> same (see the attached patch) as the existing calculations.
Actually, I don't think they're the same. I think Jarkko's version is
correct. Specifically:
- For level = PG_LEVEL_2M they're the same.
- For level = PG_LEVEL_1G:
The current code calculates a garbage mask:
mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
translates to:
>>> hex(262144 - 512)
'0x3fe00'
But I believe Jarkko's version calculates the correct mask (below),
incorporating all 18 bits of the pfn offset within the 1G page.
>>> hex(262144 - 1)
'0x3ffff'
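
To spell out the arithmetic, here's a small standalone sketch (plain
Python; with 4K base pages pages_per_hpage() is 1/512/262144 for
4K/2M/1G, and I'm reading Jarkko's variant as an offset mask of
pages_per_hpage(level) - 1):

    PAGES_PER_4K = 1
    PAGES_PER_2M = 512       # 2M / 4K
    PAGES_PER_1G = 262144    # 1G / 4K

    # Mask as computed by the current patch:
    # pages_per_hpage(level) - pages_per_hpage(level - 1)
    cur_2m = PAGES_PER_2M - PAGES_PER_4K   # 0x1ff
    cur_1g = PAGES_PER_1G - PAGES_PER_2M   # 0x3fe00

    # Mask as I read Jarkko's version: pages_per_hpage(level) - 1
    new_2m = PAGES_PER_2M - 1              # 0x1ff
    new_1g = PAGES_PER_1G - 1              # 0x3ffff

    # 2M case agrees; 1G case differs in the low 9 bits of the offset.
    print(hex(cur_1g), hex(new_1g))        # 0x3fe00 0x3ffff

So for a fault that lands in, say, the third 4K page of a 1G mapping,
the current mask drops the low 9 offset bits from the computed pfn,
while the 0x3ffff mask keeps them.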