Message-ID: <20190614131453.ludfm4ufzqwa326k@box>
Date: Fri, 14 Jun 2019 16:14:53 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...capital.net>,
David Howells <dhowells@...hat.com>,
Kees Cook <keescook@...omium.org>,
Dave Hansen <dave.hansen@...el.com>,
Kai Huang <kai.huang@...ux.intel.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Alison Schofield <alison.schofield@...el.com>,
linux-mm@...ck.org, kvm@...r.kernel.org, keyrings@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH, RFC 13/62] x86/mm: Add hooks to allocate and free
encrypted pages
On Fri, Jun 14, 2019 at 11:34:09AM +0200, Peter Zijlstra wrote:
> On Wed, May 08, 2019 at 05:43:33PM +0300, Kirill A. Shutemov wrote:
>
> > +/* Prepare page to be used for encryption. Called from page allocator. */
> > +void __prep_encrypted_page(struct page *page, int order, int keyid, bool zero)
> > +{
> > + int i;
> > +
> > + /*
> > + * The hardware/CPU does not enforce coherency between mappings
> > + * of the same physical page with different KeyIDs or
> > + * encryption keys. We are responsible for cache management.
> > + */
>
> On alloc we should flush the unencrypted (key=0) range, while on free
> (below) we should flush the encrypted (key!=0) range.
>
> But I seem to have missed where page_address() does the right thing
> here.
As you've seen by now, it will be addressed later in the patchset. I'll
update the changelog to indicate that page_address() handles KeyIDs
correctly.
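To sketch the direction (simplified; roughly how it looks later in the
series): page_to_virt() is redefined to offset into a per-KeyID slice
of the direct mapping, so the address page_address() returns carries
the page's KeyID:

	/*
	 * Sketch: each KeyID gets its own slice of the direct
	 * mapping, direct_mapping_size bytes apart. KeyID-0 pages
	 * keep the usual address, so the non-MKTME case is unchanged.
	 */
	#define page_to_virt(x) \
		(__va(PFN_PHYS(page_to_pfn(x))) + \
		 page_keyid(x) * direct_mapping_size)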
> > + clflush_cache_range(page_address(page), PAGE_SIZE * (1UL << order));
> > +
> > + for (i = 0; i < (1 << order); i++) {
> > + /* All pages coming out of the allocator should have KeyID 0 */
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid);
> > + lookup_page_ext(page)->keyid = keyid;
> > +
>
> So presumably page_address() is affected by this keyid, and the below
> clear_highpage() then accesses the 'right' location?
Yes. clear_highpage() -> kmap_atomic() -> page_address(). x86-64 has no
highmem, so kmap_atomic() falls back to page_address(), which by that
point returns the KeyID-adjusted address.
> > + /* Clear the page after the KeyID is set. */
> > + if (zero)
> > + clear_highpage(page);
> > +
> > + page++;
> > + }
> > +}
> > +
> > +/*
> > + * Handles freeing of encrypted page.
> > + * Called from page allocator on freeing encrypted page.
> > + */
> > +void free_encrypted_page(struct page *page, int order)
> > +{
> > + int i;
> > +
> > + /*
> > + * The hardware/CPU does not enforce coherency between mappings
> > + * of the same physical page with different KeyIDs or
> > + * encryption keys. We are responsible for cache management.
> > + */
>
> I still don't like that comment much; yes the hardware doesn't do it,
> and yes we have to do it, but it doesn't explain the actual scheme
> employed to do so.
Fair enough. I'll do better.
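Maybe something along these lines (wording to be refined):

	/*
	 * KeyID lives in the upper bits of the physical address, so
	 * the same page cached under different KeyIDs occupies
	 * distinct cache lines and hardware never treats them as
	 * aliases. Flush the mapping the page was last used through
	 * before it changes hands: the KeyID-0 mapping when an
	 * encrypted page is allocated, the encrypted mapping when it
	 * is freed. Otherwise stale lines tagged with the old KeyID
	 * can be written back over the new contents.
	 */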
> > + clflush_cache_range(page_address(page), PAGE_SIZE * (1UL << order));
> > +
> > + for (i = 0; i < (1 << order); i++) {
> > + /* Check if the page has reasonable KeyID */
> > + WARN_ON_ONCE(lookup_page_ext(page)->keyid > mktme_nr_keyids);
>
> It should also check keyid > 0, so maybe:
>
> (unsigned)(keyid - 1) > keyids-1
>
> instead?
Makes sense.
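Something like this, then:

	/* Encrypted pages must have KeyID in [1, mktme_nr_keyids] */
	WARN_ON_ONCE((unsigned)(lookup_page_ext(page)->keyid - 1) >
		     mktme_nr_keyids - 1);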
--
Kirill A. Shutemov