Message-Id: <20191122100922.727399494@linuxfoundation.org>
Date: Fri, 22 Nov 2019 11:28:23 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Lianbo Jiang <lijiang@...hat.com>,
Borislav Petkov <bp@...e.de>,
Tom Lendacky <thomas.lendacky@....com>,
kexec@...ts.infradead.org, tglx@...utronix.de, mingo@...hat.com,
hpa@...or.com, akpm@...ux-foundation.org, dan.j.williams@...el.com,
bhelgaas@...gle.com, baiyaowei@...s.chinamobile.com, tiwai@...e.de,
brijesh.singh@....com, dyoung@...hat.com, bhe@...hat.com,
jroedel@...e.de, Sasha Levin <sashal@...nel.org>
Subject: [PATCH 4.19 138/220] kexec: Allocate decrypted control pages for kdump if SME is enabled

From: Lianbo Jiang <lijiang@...hat.com>

[ Upstream commit 9cf38d5559e813cccdba8b44c82cc46ba48d0896 ]

When SME is enabled in the first kernel, the control pages for kdump must
be allocated as decrypted pages, because the kdump kernel accesses them in
its initial boot stage, before it has enabled SME itself; their contents
therefore have to be stored unencrypted in memory.
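
The arch_kexec_post_alloc_pages()/arch_kexec_pre_free_pages() calls added
below are per-architecture hooks. On x86 with CONFIG_AMD_MEM_ENCRYPT they
boil down to set_memory_decrypted()/set_memory_encrypted() in
arch/x86/include/asm/kexec.h; the following is a rough sketch of that x86
side, for illustration rather than a verbatim copy of the tree:

/* set_memory_{de,en}crypted() are declared in <asm/set_memory.h>. */
static inline int
arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
{
	/* Clear the C-bit so the pages are mapped and stored unencrypted. */
	return set_memory_decrypted((unsigned long)vaddr, pages);
}

static inline void
arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
{
	/* Switch the mapping back to encrypted (C-bit set). */
	set_memory_encrypted((unsigned long)vaddr, pages);
}

Architectures that do not override these hooks get no-op stubs from
include/linux/kexec.h, so the change below is inert unless memory
encryption is in play. Note also the asymmetry in the diff: the control
pages are left decrypted for the kdump kernel to consume, while
kimage_load_crash_segment() only needs a transient decrypted mapping for
the copy and re-encrypts the mapping right after kunmap(); the data
written through that mapping remains unencrypted in the reserved crash
region.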
[ bp: clean up text. ]

Signed-off-by: Lianbo Jiang <lijiang@...hat.com>
Signed-off-by: Borislav Petkov <bp@...e.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@....com>
Cc: kexec@...ts.infradead.org
Cc: tglx@...utronix.de
Cc: mingo@...hat.com
Cc: hpa@...or.com
Cc: akpm@...ux-foundation.org
Cc: dan.j.williams@...el.com
Cc: bhelgaas@...gle.com
Cc: baiyaowei@...s.chinamobile.com
Cc: tiwai@...e.de
Cc: brijesh.singh@....com
Cc: dyoung@...hat.com
Cc: bhe@...hat.com
Cc: jroedel@...e.de
Link: https://lkml.kernel.org/r/20180930031033.22110-3-lijiang@redhat.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 kernel/kexec_core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index f50b90d0d1c28..faeec8255e7e0 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -473,6 +473,10 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
 		}
 	}
 
+	/* Ensure that these pages are decrypted if SME is enabled. */
+	if (pages)
+		arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+
 	return pages;
 }
 
@@ -869,6 +873,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = -ENOMEM;
 			goto out;
 		}
+		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
 		ptr = kmap(page);
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
@@ -886,6 +891,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 		result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
 		kunmap(page);
+		arch_kexec_pre_free_pages(page_address(page), 1);
 		if (result) {
 			result = -EFAULT;
 			goto out;
--
2.20.1