Message-ID: <4d6c291b-9b10-a4fe-6486-14b0a0e9ee86@oracle.com>
Date:   Tue, 26 Sep 2023 09:55:31 -0500
From:   Eric DeVolder <eric.devolder@...cle.com>
To:     Baoquan He <bhe@...hat.com>, linux-kernel@...r.kernel.org
Cc:     kexec@...ts.infradead.org, akpm@...ux-foundation.org,
        vschneid@...hat.com, sourabhjain@...ux.ibm.com
Subject: Re: [PATCH v3] Crash: add lock to serialize crash hotplug handling



On 9/26/23 07:09, Baoquan He wrote:
> Eric reported that handling of the corresponding crash hotplug events
> can easily fail when many memory hotplug events are notified in a short
> period. They fail because the handler fails to take __kexec_lock.
> 
> =======
> [   78.714569] Fallback order for Node 0: 0
> [   78.714575] Built 1 zonelists, mobility grouping on.  Total pages: 1817886
> [   78.717133] Policy zone: Normal
> [   78.724423] crash hp: kexec_trylock() failed, elfcorehdr may be inaccurate
> [   78.727207] crash hp: kexec_trylock() failed, elfcorehdr may be inaccurate
> [   80.056643] PEFILE: Unsigned PE binary
> =======
> 
> The memory hotplug events are notified very quickly and in large
> numbers, while the handling of crash hotplug is comparatively slow. So
> the atomic variable __kexec_lock and kexec_trylock() can't guarantee the
> serialization of crash hotplug handling.
> 
> Here, add a new mutex __crash_hotplug_lock to serialize crash hotplug
> handling specifically. This doesn't impact the usage of __kexec_lock.
> 
> Signed-off-by: Baoquan He <bhe@...hat.com>

I've run this patch in my regression environment and I do not see any
lock failures! I've done this with a variety of DIMM sizes up to 8GiB in
order to vary the "size of the swarm", with both kexec_load and kexec_file_load.

Tested-by: Eric DeVolder <eric.devolder@...cle.com>
Reviewed-by: Eric DeVolder <eric.devolder@...cle.com>
> ---
> v2->v3:
>   - crash_check_update_elfcorehdr() needs to take __crash_hotplug_lock
>     too because there's a tiny race window when the kexec_load interface
>     is used. Eric pointed this out.
> v1->v2:
>   - Move mutex lock definition into CONFIG_CRASH_HOTPLUG ifdeffery
>     scope in kernel/crash_core.c because the lock is only needed and
>     used in that scope. Suggested by Eric.
>   kernel/crash_core.c | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
> 
> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> index 03a7932cde0a..2f675ef045d4 100644
> --- a/kernel/crash_core.c
> +++ b/kernel/crash_core.c
> @@ -739,6 +739,17 @@ subsys_initcall(crash_notes_memory_init);
>   #undef pr_fmt
>   #define pr_fmt(fmt) "crash hp: " fmt
>   
> +/*
> + * Unlike kexec/kdump loading/unloading/jumping/shrinking, which rarely
> + * happen, many crash hotplug events may be notified during one short
> + * period, e.g. when one memory board is hot added and its memory regions
> + * are onlined. So the mutex __crash_hotplug_lock is used to serialize
> + * the crash hotplug handling specifically.
> + */
> +DEFINE_MUTEX(__crash_hotplug_lock);
> +#define crash_hotplug_lock() mutex_lock(&__crash_hotplug_lock)
> +#define crash_hotplug_unlock() mutex_unlock(&__crash_hotplug_lock)
> +
>   /*
>    * This routine utilized when the crash_hotplug sysfs node is read.
>    * It reflects the kernel's ability/permission to update the crash
> @@ -748,9 +759,11 @@ int crash_check_update_elfcorehdr(void)
>   {
>   	int rc = 0;
>   
> +	crash_hotplug_lock();
>   	/* Obtain lock while reading crash information */
>   	if (!kexec_trylock()) {
>   		pr_info("kexec_trylock() failed, elfcorehdr may be inaccurate\n");
> +		crash_hotplug_unlock();
>   		return 0;
>   	}
>   	if (kexec_crash_image) {
> @@ -761,6 +774,7 @@ int crash_check_update_elfcorehdr(void)
>   	}
>   	/* Release lock now that update complete */
>   	kexec_unlock();
> +	crash_hotplug_unlock();
>   
>   	return rc;
>   }
> @@ -783,9 +797,11 @@ static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu)
>   {
>   	struct kimage *image;
>   
> +	crash_hotplug_lock();
>   	/* Obtain lock while changing crash information */
>   	if (!kexec_trylock()) {
>   		pr_info("kexec_trylock() failed, elfcorehdr may be inaccurate\n");
> +		crash_hotplug_unlock();
>   		return;
>   	}
>   
> @@ -852,6 +868,7 @@ static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu)
>   out:
>   	/* Release lock now that update complete */
>   	kexec_unlock();
> +	crash_hotplug_unlock();
>   }
>   
>   static int crash_memhp_notifier(struct notifier_block *nb, unsigned long val, void *v)
