Date:   Mon, 7 Dec 2020 09:38:27 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alexey Klimov <aklimov@...hat.com>
Cc:     linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        yury.norov@...il.com, tglx@...utronix.de, jobaker@...hat.com,
        audralmitchel@...il.com, arnd@...db.de, gregkh@...uxfoundation.org,
        rafael@...nel.org, tj@...nel.org, lizefan@...wei.com,
        qais.yousef@....com, hannes@...xchg.org, klimov.linux@...il.com
Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish
 on cpu online

On Thu, Dec 03, 2020 at 05:14:31PM +0000, Alexey Klimov wrote:
> When a CPU is offlined and onlined via device_offline() and device_online(),
> userspace gets a uevent notification. If, after receiving the uevent,
> userspace calls sched_setaffinity() on some task to move it to the
> recently onlined CPU, the call fails with -EINVAL. Userspace has to wait
> around 5..30 ms after receiving the uevent before sched_setaffinity()
> succeeds for the recently onlined CPU.

Right.
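
For reference, the race is easy to see from userspace with something
along these lines (rough illustrative sketch only; the CPU number is
arbitrary and it assumes cpu1 was offlined beforehand):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;
	int cpu = 1;	/* arbitrary test CPU, assumed offline on entry */
	int fd = open("/sys/devices/system/cpu/cpu1/online", O_WRONLY);

	if (fd < 0)
		return 1;
	write(fd, "1", 1);	/* online it; the uevent is sent from here */
	close(fd);

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* pid 0 == current task; fails with EINVAL until cpuset catches up */
	if (sched_setaffinity(0, sizeof(set), &set))
		printf("sched_setaffinity: %s\n", strerror(errno));

	return 0;
}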

>  Unfortunately, the execution time of
> echo 1 > /sys/devices/system/cpu/cpuX/online roughly doubled with this
> change (on my test machine).

Nobody cares, it's hotplug, it's supposed to be slow :-) That is,
we fundamentally shift the work _to_ the hotplug path, so as to keep
everybody else fast.

> The nature of this bug is also described here (with different consequences):
> https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/

Yeah, pesky deadlocks.. someone was going to try again.


>  kernel/cpu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 6ff2578ecf17..f39a27a7f24b 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -15,6 +15,7 @@
>  #include <linux/sched/smt.h>
>  #include <linux/unistd.h>
>  #include <linux/cpu.h>
> +#include <linux/cpuset.h>
>  #include <linux/oom.h>
>  #include <linux/rcupdate.h>
>  #include <linux/export.h>
> @@ -1275,6 +1276,8 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
>  	}
>  
>  	err = _cpu_up(cpu, 0, target);
> +	if (!err)
> +		cpuset_wait_for_hotplug();
>  out:
>  	cpu_maps_update_done();
>  	return err;

My only consideration is whether doing that flush while holding
cpu_add_remove_lock is wise.
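
(For context, cpuset_wait_for_hotplug() at this point is essentially just
a flush of the work item from $subject, roughly:

	/* kernel/cgroup/cpuset.c */
	void cpuset_wait_for_hotplug(void)
	{
		flush_work(&cpuset_hotplug_work);
	}

so the question is what lock dependencies that flush creates against
whatever cpuset_hotplug_workfn() takes while we are holding
cpu_add_remove_lock.)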
