Message-ID: <20180723095040.w67jp7c7cnxezuwp@kshutemo-mobl1>
Date: Mon, 23 Jul 2018 12:50:40 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Dave Hansen <dave.hansen@...el.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Tom Lendacky <thomas.lendacky@....com>,
Kai Huang <kai.huang@...ux.intel.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv5 12/19] x86/mm: Implement prep_encrypted_page() and
arch_free_page()
On Wed, Jul 18, 2018 at 04:53:27PM -0700, Dave Hansen wrote:
> The description doesn't mention the potential performance implications
> of this patch. That's criminal at this point.
>
> > --- a/arch/x86/mm/mktme.c
> > +++ b/arch/x86/mm/mktme.c
> > @@ -1,4 +1,5 @@
> > #include <linux/mm.h>
> > +#include <linux/highmem.h>
> > #include <asm/mktme.h>
> >
> > phys_addr_t mktme_keyid_mask;
> > @@ -49,3 +50,51 @@ int vma_keyid(struct vm_area_struct *vma)
> > prot = pgprot_val(vma->vm_page_prot);
> > return (prot & mktme_keyid_mask) >> mktme_keyid_shift;
> > }
> > +
> > +void prep_encrypted_page(struct page *page, int order, int keyid, bool zero)
> > +{
> > + int i;
> > +
> > + /* It's not encrypted page: nothing to do */
> > + if (!keyid)
> > + return;
>
> prep_encrypted_page() is called in the fast path in the page allocator.
> This out-of-line copy costs a function call for all users and this is
> also out of the reach of the compiler to understand that keyid!=0 is
> unlikely.
>
> I think this needs to be treated to the inline-in-the-header treatment.
Okay. Again, as a macro.
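
Something along these lines (only a sketch; __prep_encrypted_page() is a
placeholder name for the out-of-line part, not necessarily what the next
revision will end up with):

#define prep_encrypted_page prep_encrypted_page
static inline void prep_encrypted_page(struct page *page, int order,
				       int keyid, bool zero)
{
	/* KeyID 0 is the common case: not encrypted, nothing to do */
	if (keyid)
		__prep_encrypted_page(page, order, keyid, zero);
}

That keeps the keyid == 0 check visible to the compiler at the call site
and only costs a function call when the page is actually encrypted.
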
> > + /*
> > + * The hardware/CPU does not enforce coherency between mappings of the
> > + * same physical page with different KeyIDs or encryption keys.
> > + * We are responsible for cache management.
> > + *
> > + * We flush cache before allocating encrypted page
> > + */
> > + clflush_cache_range(page_address(page), PAGE_SIZE << order);
>
> It's also worth pointing out that this must be done on the keyid alias
> direct map, not the normal one.
>
> Wait a sec... How do we know which direct map to use?
page_address() -> lowmem_page_address() -> page_to_virt()
page_to_virt() returns the virtual address from the right direct mapping.
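
Roughly, each KeyID gets its own replica of the direct mapping, spaced
direct_mapping_size apart, and page_to_virt() picks the replica that
matches the page's KeyID. Something like this (a sketch; the actual
definition is in the earlier patches of the series):

#define page_to_virt(x) \
	(__va(PFN_PHYS(page_to_pfn(x))) + page_keyid(x) * direct_mapping_size)

So clflush_cache_range() ends up flushing through the KeyID-specific
alias, not the plain direct mapping.
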
> > + for (i = 0; i < (1 << order); i++) {
> > + /* All pages coming out of the allocator should have KeyID 0 */
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid);
> > + lookup_page_ext(page)->keyid = keyid;
> > +
> > + /* Clear the page after the KeyID is set. */
> > + if (zero)
> > + clear_highpage(page);
> > +
> > + page++;
> > + }
> > +}
> > +
> > +void arch_free_page(struct page *page, int order)
> > +{
> > + int i;
> > +
> > + /* It's not encrypted page: nothing to do */
> > + if (!page_keyid(page))
> > + return;
>
> Ditto on pushing this to a header.
>
> > + clflush_cache_range(page_address(page), PAGE_SIZE << order);
>
> OK, how do we know which copy of the direct map to use, here?
The same way as above.
> > + for (i = 0; i < (1 << order); i++) {
> > + /* Check if the page has reasonable KeyID */
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid > mktme_nr_keyids);
> > + lookup_page_ext(page)->keyid = 0;
> > + page++;
> > + }
> > +}
> >
>
--
Kirill A. Shutemov