Date:   Mon, 31 May 2021 12:21:13 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     linux-kernel@...r.kernel.org
Cc:     linux-tip-commits@...r.kernel.org,
        Yejune Deng <yejune.deng@...il.com>,
        Valentin Schneider <valentin.schneider@....com>, x86@...nel.org
Subject: [PATCH] sched,init: Fix DEBUG_PREEMPT vs early boot

On Wed, May 19, 2021 at 09:02:34AM -0000, tip-bot2 for Yejune Deng wrote:
> The following commit has been merged into the sched/core branch of tip:
> 
> Commit-ID:     570a752b7a9bd03b50ad6420cd7f10092cc11bd3
> Gitweb:        https://git.kernel.org/tip/570a752b7a9bd03b50ad6420cd7f10092cc11bd3
> Author:        Yejune Deng <yejune.deng@...il.com>
> AuthorDate:    Mon, 10 May 2021 16:10:24 +01:00
> Committer:     Peter Zijlstra <peterz@...radead.org>
> CommitterDate: Wed, 19 May 2021 10:51:40 +02:00
> 
> lib/smp_processor_id: Use is_percpu_thread() instead of nr_cpus_allowed
> 
> is_percpu_thread() more elegantly handles SMP vs UP, and further checks the
> presence of PF_NO_SETAFFINITY. This lets us catch cases where
> check_preemption_disabled() can race with a concurrent sched_setaffinity().
> 
> Signed-off-by: Yejune Deng <yejune.deng@...il.com>
> [Amended changelog]
> Signed-off-by: Valentin Schneider <valentin.schneider@....com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Link: https://lkml.kernel.org/r/20210510151024.2448573-3-valentin.schneider@arm.com
> ---
>  lib/smp_processor_id.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
> index 1c1dbd3..046ac62 100644
> --- a/lib/smp_processor_id.c
> +++ b/lib/smp_processor_id.c
> @@ -19,11 +19,7 @@ unsigned int check_preemption_disabled(const char *what1, const char *what2)
>  	if (irqs_disabled())
>  		goto out;
>  
> -	/*
> -	 * Kernel threads bound to a single CPU can safely use
> -	 * smp_processor_id():
> -	 */
> -	if (current->nr_cpus_allowed == 1)
> +	if (is_percpu_thread())
>  		goto out;

So my test box was unhappy with all this and started spewing lots of
DEBUG_PREEMPT warnings on boot.

This extends 8fb12156b8db to cover the new requirement.

---
Subject: sched,init: Fix DEBUG_PREEMPT vs early boot

Extend 8fb12156b8db ("init: Pin init task to the boot CPU, initially")
to cover the new PF_NO_SETAFFINITY requirement.

While there, move wait_for_completion(&kthreadd_done) into kernel_init()
to make it absolutely clear it is the very first thing done by the init
thread.

Fixes: 570a752b7a9b ("lib/smp_processor_id: Use is_percpu_thread() instead of nr_cpus_allowed")
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 init/main.c         | 11 ++++++-----
 kernel/sched/core.c |  1 +
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/init/main.c b/init/main.c
index 7b027d9c5c89..e945ec82b8a5 100644
--- a/init/main.c
+++ b/init/main.c
@@ -692,6 +692,7 @@ noinline void __ref rest_init(void)
 	 */
 	rcu_read_lock();
 	tsk = find_task_by_pid_ns(pid, &init_pid_ns);
+	tsk->flags |= PF_NO_SETAFFINITY;
 	set_cpus_allowed_ptr(tsk, cpumask_of(smp_processor_id()));
 	rcu_read_unlock();
 
@@ -1440,6 +1441,11 @@ static int __ref kernel_init(void *unused)
 {
 	int ret;
 
+	/*
+	 * Wait until kthreadd is all set-up.
+	 */
+	wait_for_completion(&kthreadd_done);
+
 	kernel_init_freeable();
 	/* need to finish all async __init code before freeing the memory */
 	async_synchronize_full();
@@ -1520,11 +1526,6 @@ void __init console_on_rootfs(void)
 
 static noinline void __init kernel_init_freeable(void)
 {
-	/*
-	 * Wait until kthreadd is all set-up.
-	 */
-	wait_for_completion(&kthreadd_done);
-
 	/* Now the scheduler is fully set up and can do blocking allocations */
 	gfp_allowed_mask = __GFP_BITS_MASK;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index adea0b1e8036..ae7737e6c2b2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8867,6 +8867,7 @@ void __init sched_init_smp(void)
 	/* Move init over to a non-isolated CPU */
 	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
 		BUG();
+	current->flags &= ~PF_NO_SETAFFINITY;
 	sched_init_granularity();
 
 	init_sched_rt_class();
