Message-ID: <20180618101828.hxp2dw3fmfxwk2ka@black.fi.intel.com>
Date: Mon, 18 Jun 2018 13:18:28 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Tom Lendacky <thomas.lendacky@....com>,
Kai Huang <kai.huang@...ux.intel.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv3 10/17] x86/mm: Implement prep_encrypted_page() and
arch_free_page()
On Wed, Jun 13, 2018 at 06:26:10PM +0000, Dave Hansen wrote:
> On 06/12/2018 07:39 AM, Kirill A. Shutemov wrote:
> > prep_encrypted_page() also takes care about zeroing the page. We have to
> > do this after KeyID is set for the page.
>
> This is an implementation detail that has gone unmentioned until now but
> has impacted at least half a dozen locations in previous patches. Can
> you rectify that, please?
It was mentioned in the commit message of 04/17.
> > +void prep_encrypted_page(struct page *page, int order, int keyid, bool zero)
> > +{
> > + int i;
> > +
> > + /*
> > + * The hardware/CPU does not enforce coherency between mappings of the
> > + * same physical page with different KeyIDs or encrypt ion keys.
>
> What are "encrypt ion"s? :)
:P
> > + * We are responsible for cache management.
> > + *
> > + * We flush cache before allocating encrypted page
> > + */
> > + clflush_cache_range(page_address(page), PAGE_SIZE << order);
> > +
> > + for (i = 0; i < (1 << order); i++) {
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid);
>
> /* All pages coming out of the allocator should have KeyID 0 */
>
Okay.
> > + lookup_page_ext(page)->keyid = keyid;
> > + /* Clear the page after the KeyID is set. */
> > + if (zero)
> > + clear_highpage(page);
> > + }
> > +}
>
> How expensive is this?
It just shifts the cost of zeroing from the page allocator to here. It
should not have a huge effect.
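As a rough userspace model of that ordering (the names `page_meta`,
`alloc_page_nozero` and `prep_encrypted` are made up for illustration, not
the kernel API): allocation skips zeroing, the KeyID is set first, and only
then is the page cleared, so the total zeroing work stays the same:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page_meta {
	int keyid;
};

/* Model: the allocator hands back a page it has NOT zeroed. */
static char *alloc_page_nozero(struct page_meta *meta)
{
	meta->keyid = 0;		/* pages leave the allocator with KeyID 0 */
	return malloc(PAGE_SIZE);	/* deliberately not zeroed here */
}

/* Model of prep_encrypted_page(): set the KeyID first, then zero. */
static void prep_encrypted(char *page, struct page_meta *meta,
			   int keyid, int zero)
{
	assert(meta->keyid == 0);	/* allocator must hand out KeyID 0 */
	meta->keyid = keyid;
	if (zero)
		memset(page, 0, PAGE_SIZE); /* zeroing happens under the new KeyID */
}
```

So a caller that would have asked the allocator for a zeroed page instead
pays the same memset cost here, after the KeyID switch.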
> > +void arch_free_page(struct page *page, int order)
> > +{
> > + int i;
> >
>
> /* KeyId-0 pages were not used for MKTME and need no work */
>
> ... or something
Okay.
> > + if (!page_keyid(page))
> > + return;
>
> Is page_keyid() optimized so that all this goes away automatically when
> MKTME is compiled out or unsupported?
If MKTME is not enabled at compile time, this translation unit is not
compiled at all.

I have not yet optimized the case where MKTME is unsupported at run time.
I'll optimize it based on performance measurements.
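One plausible shape for that run-time optimization (a sketch only, not what
the patch does: `mktme_enabled` is an ordinary flag standing in for a kernel
static key, and `freed_flushes` is a counter standing in for
clflush_cache_range()) is to have page_keyid() fold to 0 when the platform
lacks MKTME, so arch_free_page() bails out immediately:

```c
#include <assert.h>

/* Stand-in for a disabled-by-default kernel static key. */
static int mktme_enabled;

struct page {
	int keyid;
};

/*
 * When MKTME is unsupported, every page reports KeyID 0 and the
 * free path does no cache work at all.
 */
static int page_keyid(const struct page *page)
{
	if (!mktme_enabled)	/* would be a static_branch_unlikely() */
		return 0;
	return page->keyid;
}

static int freed_flushes;	/* counts flushes performed in this model */

static void arch_free_page(struct page *page)
{
	if (!page_keyid(page))
		return;		/* KeyID-0 pages need no work */
	page->keyid = 0;
	freed_flushes++;	/* stands in for clflush_cache_range() */
}
```

With a real static key the disabled case would compile down to a patched-out
branch, making the check effectively free on unsupported hardware.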
> > + for (i = 0; i < (1 << order); i++) {
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid > mktme_nr_keyids);
> > + lookup_page_ext(page)->keyid = 0;
> > + }
> > +
> > + clflush_cache_range(page_address(page), PAGE_SIZE << order);
> > +}
>
--
Kirill A. Shutemov