Message-ID: <170663196394.398.12352089864887604765.tip-bot2@tip-bot2>
Date: Tue, 30 Jan 2024 16:26:03 -0000
From: "tip-bot2 for Ashish Kalra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Vlastimil Babka <vbabka@...e.cz>, Ashish Kalra <ashish.kalra@....com>,
Michael Roth <michael.roth@....com>, "Borislav Petkov (AMD)" <bp@...en8.de>,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/sev] x86/sev: Introduce an SNP leaked pages list
The following commit has been merged into the x86/sev branch of tip:
Commit-ID: 8dac642999b1542e0f0abefba100d8bd11226c83
Gitweb: https://git.kernel.org/tip/8dac642999b1542e0f0abefba100d8bd11226c83
Author: Ashish Kalra <ashish.kalra@....com>
AuthorDate: Thu, 25 Jan 2024 22:11:15 -06:00
Committer: Borislav Petkov (AMD) <bp@...en8.de>
CommitterDate: Mon, 29 Jan 2024 20:34:18 +01:00
x86/sev: Introduce an SNP leaked pages list
Pages are unsafe to be released back to the page allocator if they
have been transitioned to firmware/guest state and can't be reclaimed
or transitioned back to hypervisor/shared state. In this case, add them
to an internal leaked pages list to ensure that they are neither freed
nor touched/accessed again, which would otherwise cause fatal page faults.
[ mdr: Relocate to arch/x86/virt/svm/sev.c ]
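As a rough illustration of the intended call pattern (a sketch for this
write-up, not text from the commit): a host-side path that fails to flip a
page back to hypervisor/shared state would hand it to snp_leak_pages()
instead of freeing it. The wrapper name and the 4K-page assumption below are
hypothetical; rmp_make_shared(), snp_leak_pages(), PG_LEVEL_4K, pfn_to_page()
and __free_page() are existing kernel interfaces (usual <asm/sev.h> and
<linux/mm.h> includes assumed).

/* Hypothetical caller sketch: leak a stuck page instead of freeing it. */
static void snp_free_or_leak_page(u64 pfn)
{
        /* Try to transition the page back to hypervisor/shared state. */
        if (rmp_make_shared(pfn, PG_LEVEL_4K)) {
                /* Stuck in firmware/guest state: never return it to the allocator. */
                snp_leak_pages(pfn, 1);
                return;
        }

        __free_page(pfn_to_page(pfn));
}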
Suggested-by: Vlastimil Babka <vbabka@...e.cz>
Signed-off-by: Ashish Kalra <ashish.kalra@....com>
Signed-off-by: Michael Roth <michael.roth@....com>
Signed-off-by: Borislav Petkov (AMD) <bp@...en8.de>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
Link: https://lore.kernel.org/r/20240126041126.1927228-16-michael.roth@amd.com
---
 arch/x86/include/asm/sev.h |  2 ++
 arch/x86/virt/svm/sev.c    | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 57fd95a..60de1b4 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -264,6 +264,7 @@ void snp_dump_hva_rmpentry(unsigned long address);
 int psmash(u64 pfn);
 int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 asid, bool immutable);
 int rmp_make_shared(u64 pfn, enum pg_level level);
+void snp_leak_pages(u64 pfn, unsigned int npages);
 #else
 static inline bool snp_probe_rmptable_info(void) { return false; }
 static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENODEV; }
@@ -275,6 +276,7 @@ static inline int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 as
         return -ENODEV;
 }
 static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV; }
+static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
 #endif
 
 #endif
diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 5566fb0..0dffbf3 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -65,6 +65,11 @@ static u64 probed_rmp_base, probed_rmp_size;
 static struct rmpentry *rmptable __ro_after_init;
 static u64 rmptable_max_pfn __ro_after_init;
 
+static LIST_HEAD(snp_leaked_pages_list);
+static DEFINE_SPINLOCK(snp_leaked_pages_list_lock);
+
+static unsigned long snp_nr_leaked_pages;
+
 #undef pr_fmt
 #define pr_fmt(fmt) "SEV-SNP: " fmt
 
@@ -515,3 +520,35 @@ int rmp_make_shared(u64 pfn, enum pg_level level)
         return rmpupdate(pfn, &state);
 }
 EXPORT_SYMBOL_GPL(rmp_make_shared);
+
+void snp_leak_pages(u64 pfn, unsigned int npages)
+{
+        struct page *page = pfn_to_page(pfn);
+
+        pr_warn("Leaking PFN range 0x%llx-0x%llx\n", pfn, pfn + npages);
+
+        spin_lock(&snp_leaked_pages_list_lock);
+        while (npages--) {
+
+                /*
+                 * Reuse the page's buddy list for chaining into the leaked
+                 * pages list. This page should not be on a free list currently
+                 * and is also unsafe to be added to a free list.
+                 */
+                if (likely(!PageCompound(page)) ||
+
+                        /*
+                         * Skip inserting tail pages of compound page as
+                         * page->buddy_list of tail pages is not usable.
+                         */
+                        (PageHead(page) && compound_nr(page) <= npages))
+                        list_add_tail(&page->buddy_list, &snp_leaked_pages_list);
+
+                dump_rmpentry(pfn);
+                snp_nr_leaked_pages++;
+                pfn++;
+                page++;
+        }
+        spin_unlock(&snp_leaked_pages_list_lock);
+}
+EXPORT_SYMBOL_GPL(snp_leak_pages);
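
Since the patch chains leaked pages through the existing page->buddy_list
member, the list can also be walked later for debugging. A minimal sketch of
such a walker, assuming it lives in the same file so the static list and lock
are visible (the helper name is made up, not part of the patch):

/* Hypothetical debug helper: dump every PFN currently on the leaked list. */
static void snp_dump_leaked_pages(void)
{
        struct page *page;

        spin_lock(&snp_leaked_pages_list_lock);
        list_for_each_entry(page, &snp_leaked_pages_list, buddy_list)
                pr_info("leaked PFN 0x%lx\n", page_to_pfn(page));
        spin_unlock(&snp_leaked_pages_list_lock);
}

Reusing buddy_list keeps the bookkeeping allocation-free, which fits here:
these pages are being abandoned precisely because normal page management must
no longer touch them, so their free-list linkage is expected to be unused.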