Message-ID: <20240307000256.34352-1-tony.luck@intel.com>
Date: Wed, 6 Mar 2024 16:02:56 -0800
From: Tony Luck <tony.luck@...el.com>
To: Borislav Petkov <bp@...en8.de>
Cc: "Naik, Avadhut" <avadnaik@....com>,
"Mehta, Sohil" <sohil.mehta@...el.com>,
"Yazen Ghannam" <yazen.ghannam@....com>,
x86@...nel.org,
linux-edac@...r.kernel.org,
linux-kernel@...r.kernel.org,
Tony Luck <tony.luck@...el.com>
Subject: [PATCH v2] x86/mce: Dynamically size space for machine check records

Systems with a large number of CPUs may generate a large
number of machine check records when things go seriously
wrong. But Linux has a fixed buffer that can only capture
a few dozen errors.
Allocate space based on the number of CPUs (with a minimum
value based on the historical fixed buffer that could store
80 records).

Signed-off-by: Tony Luck <tony.luck@...el.com>
---
Changes since v1:
Link: https://lore.kernel.org/all/Zd--PJp-NbXGrb39@agluck-desk3/

Sohil:
	Group the declaration of "order" with the other ints in mce_gen_pool_create()
	Use #define MCE_MIN_ENTRIES instead of a hard-coded inline "80"
	Add the kfree(mce_pool) that was missing in the error path
Yazen:
	Use order_base_2() instead of ilog2(), as the rounded-up size of the
	structure is needed
Avadhut:
	Allocate 2 records per CPU
Me:
	Add a #define MCE_PER_CPU for the number of records per CPU
arch/x86/kernel/cpu/mce/genpool.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/genpool.c b/arch/x86/kernel/cpu/mce/genpool.c
index fbe8b61c3413..42ce3dc97ca8 100644
--- a/arch/x86/kernel/cpu/mce/genpool.c
+++ b/arch/x86/kernel/cpu/mce/genpool.c
@@ -16,14 +16,14 @@
  * used to save error information organized in a lock-less list.
  *
  * This memory pool is only to be used to save MCE records in MCE context.
- * MCE events are rare, so a fixed size memory pool should be enough. Use
- * 2 pages to save MCE events for now (~80 MCE records at most).
+ * MCE events are rare, so a fixed size memory pool should be enough.
+ * Allocate on a sliding scale based on number of CPUs.
  */
-#define MCE_POOLSZ	(2 * PAGE_SIZE)
+#define MCE_MIN_ENTRIES	80
+#define MCE_PER_CPU	2
 
 static struct gen_pool *mce_evt_pool;
 static LLIST_HEAD(mce_event_llist);
-static char gen_pool_buf[MCE_POOLSZ];
 
 /*
  * Compare the record "t" with each of the records on list "l" to see if
@@ -118,16 +118,27 @@ int mce_gen_pool_add(struct mce *mce)
 
 static int mce_gen_pool_create(void)
 {
+	int mce_numrecords, mce_poolsz, order;
 	struct gen_pool *tmpp;
 	int ret = -ENOMEM;
+	void *mce_pool;
 
-	tmpp = gen_pool_create(ilog2(sizeof(struct mce_evt_llist)), -1);
+	order = order_base_2(sizeof(struct mce_evt_llist));
+	tmpp = gen_pool_create(order, -1);
 	if (!tmpp)
 		goto out;
 
-	ret = gen_pool_add(tmpp, (unsigned long)gen_pool_buf, MCE_POOLSZ, -1);
+	mce_numrecords = max(MCE_MIN_ENTRIES, num_possible_cpus() * MCE_PER_CPU);
+	mce_poolsz = mce_numrecords * (1 << order);
+	mce_pool = kmalloc(mce_poolsz, GFP_KERNEL);
+	if (!mce_pool) {
+		gen_pool_destroy(tmpp);
+		goto out;
+	}
+	ret = gen_pool_add(tmpp, (unsigned long)mce_pool, mce_poolsz, -1);
 	if (ret) {
 		gen_pool_destroy(tmpp);
+		kfree(mce_pool);
 		goto out;
 	}
 
base-commit: d206a76d7d2726f3b096037f2079ce0bd3ba329b
--
2.43.0