Message-ID: <Z73JRsuderQIW1HT@gmail.com>
Date: Tue, 25 Feb 2025 14:44:38 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Maksim Davydov <davydov-max@...dex-team.ru>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
	den-plotnikov@...dex-team.ru, gpiccoli@...lia.com,
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
	dave.hansen@...ux.intel.com, hpa@...or.com
Subject: Re: [PATCH RESEND v4] x86/split_lock: fix delayed detection enabling


* Maksim Davydov <davydov-max@...dex-team.ru> wrote:

> If warn mode is used with the mitigation disabled, then on each CPU
> where a split lock occurs, detection is disabled so the CPU can make
> progress, and delayed work is scheduled to re-enable detection later.
> It turns out that all CPUs share one global delayed work structure.
> As a result, if split locks occur on several CPUs at the same time
> (within 2 jiffies), only one CPU actually schedules the delayed work;
> the rest do not. The return value of schedule_delayed_work_on() would
> have revealed this, but it is not checked in the code.
> 
> A diagram that can help to understand the bug reproduction:
> https://lore.kernel.org/all/2cd54041-253b-4e78-b8ea-dbe9b884ff9b@yandex-team.ru/
> 
> In order to fix warn mode with the mitigation disabled, the delayed
> work has to be per-CPU.
> 
> v3 -> v4:
> * rebased the patch onto the latest master
> 
> v2 -> v3:
> * changed the place and time of the per-CPU structure initialization.
>   An initcall doesn't seem to be a good place for it, so deferred
>   initialization is used.
> 
> Fixes: 727209376f49 ("x86/split_lock: Add sysctl to control the misery mode")
> Signed-off-by: Maksim Davydov <davydov-max@...dex-team.ru>
> Tested-by: Guilherme G. Piccoli <gpiccoli@...lia.com>
> ---
>  arch/x86/kernel/cpu/bus_lock.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bus_lock.c b/arch/x86/kernel/cpu/bus_lock.c
> index 704e9241b964..b72235c8db3e 100644
> --- a/arch/x86/kernel/cpu/bus_lock.c
> +++ b/arch/x86/kernel/cpu/bus_lock.c
> @@ -192,7 +192,13 @@ static void __split_lock_reenable(struct work_struct *work)
>  {
>  	sld_update_msr(true);
>  }
> -static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
> +/*
> + * To let each CPU schedule its delayed work independently of the
> + * others, the delayed work struct must be per-CPU. This is not
> + * required when sysctl_sld_mitigate is enabled, because the semaphore
> + * limits the number of simultaneously scheduled delayed works to 1.
> + */
> +static DEFINE_PER_CPU(struct delayed_work, sl_reenable);
>  
>  /*
>   * If a CPU goes offline with pending delayed work to re-enable split lock
> @@ -213,7 +219,7 @@ static int splitlock_cpu_offline(unsigned int cpu)
>  
>  static void split_lock_warn(unsigned long ip)
>  {
> -	struct delayed_work *work;
> +	struct delayed_work *work = NULL;
>  	int cpu;
>  
>  	if (!current->reported_split_lock)
> @@ -235,11 +241,17 @@ static void split_lock_warn(unsigned long ip)
>  		if (down_interruptible(&buslock_sem) == -EINTR)
>  			return;
>  		work = &sl_reenable_unlock;
> -	} else {
> -		work = &sl_reenable;
>  	}
>  
>  	cpu = get_cpu();
> +
> +	if (!work) {
> +		work = this_cpu_ptr(&sl_reenable);
> +		/* Deferred initialization of per-CPU struct */
> +		if (!work->work.func)
> +			INIT_DELAYED_WORK(work, __split_lock_reenable);

The INIT_DELAYED_WORK() call shouldn't be done here, but in some sort 
of init function. The control flow is complicated enough already.

Also, this_cpu_ptr() isn't necessary: we already have 'cpu' as the 
current CPU, so per_cpu() ought to be enough.
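
Roughly something like this (a sketch only, not a tested patch; the 
helper name and the exact call site in the split-lock setup path are 
illustrative):

	/*
	 * Sketch: initialize every per-CPU work item once, from the
	 * existing split-lock setup path, instead of lazily in
	 * split_lock_warn(). sl_reenable_init() is a hypothetical name.
	 */
	static void __init sl_reenable_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			INIT_DELAYED_WORK(per_cpu_ptr(&sl_reenable, cpu),
					  __split_lock_reenable);
	}

Then split_lock_warn() can simply do, after get_cpu():

	work = &per_cpu(sl_reenable, cpu);

with no deferred-initialization check in the warn path at all.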

Thanks,

	Ingo
