Message-ID: <e6675835-46ca-4183-86ce-008fde928e73@amd.com>
Date: Wed, 6 Mar 2024 15:52:34 -0600
From: "Naik, Avadhut" <avadnaik@....com>
To: Tony Luck <tony.luck@...el.com>, Borislav Petkov <bp@...en8.de>
Cc: "Mehta, Sohil" <sohil.mehta@...el.com>, "x86@...nel.org"
<x86@...nel.org>, "linux-edac@...r.kernel.org" <linux-edac@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"yazen.ghannam@....com" <yazen.ghannam@....com>,
Avadhut Naik <avadhut.naik@....com>
Subject: [PATCH] x86/mce: Dynamically size space for machine check records
Hi,
On 2/28/2024 17:14, Tony Luck wrote:
> Systems with a large number of CPUs may generate a large
> number of machine check records when things go seriously
> wrong. But Linux has a fixed buffer that can only capture
> a few dozen errors.
>
> Allocate space based on the number of CPUs (with a minimum
> value based on the historical fixed buffer that could store
> 80 records).
>
> Signed-off-by: Tony Luck <tony.luck@...el.com>
> ---
>
> Discussion earlier concluded with the realization that it is
> safe to dynamically allocate the mce_evt_pool at boot time.
> So here's a patch to do that. Scaling algorithm here is a
> simple linear "4 records per possible CPU" with a minimum
> of 80 to match the legacy behavior. I'm open to other
> suggestions.
>
> Note that I threw in a "+1" to the return from ilog2() when
> calling gen_pool_create(). From reading code, and running
> some tests, it appears that the min_alloc_order argument
> needs to be large enough to allocate one of the mce_evt_llist
> structures.
>
> Some other gen_pool users in Linux may also need this "+1".
>
> arch/x86/kernel/cpu/mce/genpool.c | 22 ++++++++++++++++------
> 1 file changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mce/genpool.c b/arch/x86/kernel/cpu/mce/genpool.c
> index fbe8b61c3413..a1f0a8f29cf5 100644
> --- a/arch/x86/kernel/cpu/mce/genpool.c
> +++ b/arch/x86/kernel/cpu/mce/genpool.c
> @@ -16,14 +16,13 @@
> * used to save error information organized in a lock-less list.
> *
> * This memory pool is only to be used to save MCE records in MCE context.
> - * MCE events are rare, so a fixed size memory pool should be enough. Use
> - * 2 pages to save MCE events for now (~80 MCE records at most).
> + * MCE events are rare, so a fixed size memory pool should be enough.
> + * Allocate on a sliding scale based on number of CPUs.
> */
> -#define MCE_POOLSZ (2 * PAGE_SIZE)
> +#define MCE_MIN_ENTRIES 80
>
> static struct gen_pool *mce_evt_pool;
> static LLIST_HEAD(mce_event_llist);
> -static char gen_pool_buf[MCE_POOLSZ];
>
> /*
> * Compare the record "t" with each of the records on list "l" to see if
> @@ -118,14 +117,25 @@ int mce_gen_pool_add(struct mce *mce)
>
> static int mce_gen_pool_create(void)
> {
> + int mce_numrecords, mce_poolsz;
> struct gen_pool *tmpp;
> int ret = -ENOMEM;
> + void *mce_pool;
> + int order;
>
> - tmpp = gen_pool_create(ilog2(sizeof(struct mce_evt_llist)), -1);
> + order = ilog2(sizeof(struct mce_evt_llist)) + 1;
> + tmpp = gen_pool_create(order, -1);
> if (!tmpp)
> goto out;
>
> - ret = gen_pool_add(tmpp, (unsigned long)gen_pool_buf, MCE_POOLSZ, -1);
> + mce_numrecords = max(80, num_possible_cpus() * 4);
Per Boris's suggestion below, shouldn't this be:
	mce_numrecords = max(80, num_possible_cpus() * 16);
>> min(4*PAGE_SIZE, num_possible_cpus() * PAGE_SIZE);
>
> max() ofc.
>
>> There's a sane minimum and one page pro logical CPU should be fine on
>> pretty much every configuration...
Given the intrinsic genpool behavior you explained in the other subthread, 4 MCE
records per CPU amount to only 1024 bytes per CPU, rather than the one page per
CPU suggested.
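
Spelling out the math (assuming sizeof(struct mce_evt_llist) lands in the
129..256 byte range, so each record occupies one 256-byte genpool chunk with
the order computed above):

	order = ilog2(sizeof(struct mce_evt_llist)) + 1 = 8	/* assuming a 129..256 byte struct */
	4 records per CPU : 4 * 256  = 1024 bytes per CPU
	1 page  per CPU   : 4096/256 = 16 records per CPU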
Apart from this, I tested the patch on a couple of AMD systems and didn't
observe any issues.
> + mce_poolsz = mce_numrecords * (1 << order);
> + mce_pool = kmalloc(mce_poolsz, GFP_KERNEL);
> + if (!mce_pool) {
> + gen_pool_destroy(tmpp);
> + goto out;
> + }
> + ret = gen_pool_add(tmpp, (unsigned long)mce_pool, mce_poolsz, -1);
> if (ret) {
> gen_pool_destroy(tmpp);
> goto out;
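
For anyone who wants to play with the numbers, here is a small user-space
sketch of the sizing logic in mce_gen_pool_create(). The 136-byte record size
and the 512-CPU count are made-up example values (the kernel uses
sizeof(struct mce_evt_llist) and num_possible_cpus()), and ilog2_u() is only a
stand-in for the kernel's ilog2():

#include <stdio.h>

/* Floor log2 of a non-zero value, mirroring the kernel's ilog2(). */
static int ilog2_u(unsigned long v)
{
	int l = -1;

	while (v) {
		v >>= 1;
		l++;
	}
	return l;
}

int main(void)
{
	unsigned long record_size = 136;	/* assumed sizeof(struct mce_evt_llist) */
	unsigned long ncpus = 512;		/* example num_possible_cpus() */

	/* Without the +1 the chunks would be 128 bytes: too small for one record. */
	int order = ilog2_u(record_size) + 1;

	/* 4 records per CPU as in the patch; 16 would match one page per CPU. */
	unsigned long nrec = ncpus * 4 > 80 ? ncpus * 4 : 80;
	unsigned long poolsz = nrec * (1UL << order);

	printf("order=%d chunk=%lu records=%lu pool=%lu bytes\n",
	       order, 1UL << order, nrec, poolsz);
	return 0;
}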
--
Thanks,
Avadhut Naik