Message-Id: <20180305162610.37510-20-kirill.shutemov@linux.intel.com>
Date: Mon, 5 Mar 2018 19:26:07 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Tom Lendacky <thomas.lendacky@....com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Kai Huang <kai.huang@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()

As on allocation of an encrypted page, we need to flush the cache
before returning the page to the free pool. The hardware doesn't
enforce cache coherency between mappings of the same physical page
with different KeyIDs, so failing to flush may lead to data
corruption.
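
For illustration, here is a minimal sketch of the intended caller
side, assuming a page_keyid() accessor and a free-path hook along the
lines of other patches in this series (both names are assumptions,
not part of this patch):

	/*
	 * Hypothetical sketch, not part of this patch: the free path
	 * flushes pages that still carry a non-zero KeyID before they
	 * are recycled with KeyID 0.
	 */
	static inline void arch_free_page_keyid(struct page *page,
						unsigned int order)
	{
		int keyid = page_keyid(page);	/* assumed accessor */

		if (keyid)
			free_encrypt_page(page, keyid, order);
	}
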
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
arch/x86/mm/mktme.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/arch/x86/mm/mktme.c b/arch/x86/mm/mktme.c
index 1129ad25b22a..ef0eb1eb8d6e 100644
--- a/arch/x86/mm/mktme.c
+++ b/arch/x86/mm/mktme.c
@@ -45,6 +45,19 @@ void prep_encrypt_page(struct page *page, gfp_t gfp, unsigned int order)
 	WARN_ONCE(gfp & __GFP_ZERO, "__GFP_ZERO is useless for encrypted pages");
 }
 
+void free_encrypt_page(struct page *page, int keyid, unsigned int order)
+{
+	int i;
+	void *v;
+
+	for (i = 0; i < (1 << order); i++) {
+		v = kmap_atomic_keyid(page + i, keyid);
+		/* See comment in prep_encrypt_page() */
+		clflush_cache_range(v, PAGE_SIZE);
+		kunmap_atomic(v);
+	}
+}
+
 struct page *__alloc_zeroed_encrypted_user_highpage(gfp_t gfp,
 		struct vm_area_struct *vma, unsigned long vaddr)
 {
--
2.16.1
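
For context on why the flush has to go through a KeyID-specific
mapping: MKTME encodes the KeyID in the upper physical address bits,
so the same page accessed through two different KeyIDs appears as two
non-coherent cache aliases. A conceptual sketch, with keyid_shift (the
lowest physical address bit used for the KeyID) as an assumed name
rather than this series' API:

	/*
	 * Conceptual sketch, illustrative only: each KeyID mapping of
	 * a page is a distinct cache alias, which is why
	 * free_encrypt_page() flushes through the page's own KeyID.
	 */
	static inline unsigned long pfn_for_keyid(unsigned long pfn,
						  int keyid)
	{
		return pfn | ((unsigned long)keyid <<
			      (keyid_shift - PAGE_SHIFT));
	}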