Date:   Wed, 26 Jul 2017 05:57:15 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        linux-kernel@...r.kernel.org,
        Lai Jiangshan <jiangshanlai@...il.com>, kernel-team@...com
Subject: Re: [PATCH RFC] sched: Allow migrating kthreads into online but
 inactive CPUs

On Tue, Jul 25, 2017 at 06:58:21PM +0200, Peter Zijlstra wrote:
> Hi,
> 
> On Sat, Jun 17, 2017 at 08:10:08AM -0400, Tejun Heo wrote:
> > Per-cpu workqueues have been tripping CPU affinity sanity checks while
> > a CPU is being offlined.  A per-cpu kworker ends up running on a CPU
> > which isn't its target CPU while the CPU is online but inactive.
> > 
> > While the scheduler allows kthreads to wake up on an online but
> > inactive CPU, it doesn't allow a running kthread to be migrated to
> > such a CPU, which leads to an odd situation where setting affinity on
> > a sleeping and running kthread leads to different results.
> > 
> > Each mem-reclaim workqueue has one rescuer which guarantees forward
> > progress and the rescuer needs to bind itself to the CPU which needs
> > help in making forward progress; however, due to the above issue,
> > while set_cpus_allowed_ptr() succeeds, the rescuer doesn't end up on
> > the correct CPU if the CPU is in the process of going offline,
> > tripping the sanity check and executing the work item on the wrong
> > CPU.
> > 
> > This patch updates __migrate_task() so that kthreads can be migrated
> > into an inactive but online CPU.
> > 
> > Signed-off-by: Tejun Heo <tj@...nel.org>
> > Reported-by: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> > Reported-by: Steven Rostedt <rostedt@...dmis.org>
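
[Aside, for readers following along: the binding Tejun describes happens
when the rescuer attaches itself to the stuck pool, which, paraphrasing
kernel/workqueue.c of this era, comes down to
set_cpus_allowed_ptr(rescuer->task, pool->attrs->cpumask) followed by a
later sanity check in process_one_work().  The wrapper name below is
made up for illustration:

	/*
	 * Paraphrased sketch, not verbatim kernel code.  The rescuer's
	 * set_cpus_allowed_ptr() call succeeds, but before this fix a
	 * *running* kthread was not actually migrated onto an
	 * online-but-inactive CPU, so this check in process_one_work()
	 * fired.  POOL_DISASSOCIATED is set only once the pool's CPU
	 * has actually gone down.
	 */
	static void check_worker_cpu_binding(struct worker_pool *pool)
	{
		WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
			     raw_smp_processor_id() != pool->cpu);
	}
]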
> 
> Hmm.. so the rules for running on !active && online are slightly
> stricter than just being a kthread.  How about the below; does that
> work too?
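
[For reference: the !active && online window exists because a CPU being
taken down leaves cpu_active_mask first, via sched_cpu_deactivate(),
and cpu_online_mask only later.  Illustrative helper, name made up, not
part of either patch:

	static bool cpu_in_hotplug_window(int cpu)
	{
		/* The teardown window in which only pinned kthreads run. */
		return cpu_online(cpu) && !cpu_active(cpu);
	}
]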

Of 24 one-hour runs of the TREE07 rcutorture scenario, two had stalled
tasks with this patch.  One of those runs had more than 200 instances,
the other only two.  In contrast, a 30-hour run a week ago with Tejun's
patch completed cleanly.  Here "stalled task" means that one of
rcutorture's update-side kthreads fails to make any progress for more
than 15 seconds.  Grace periods are progressing, but a kthread waiting
for a grace period isn't making progress, and is stuck with its ->state
field at 0x402, that is, TASK_NOLOAD|TASK_UNINTERRUPTIBLE.  This is as
if it never received the wakeup, given that it is sleeping in
schedule_timeout_idle().
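
For reference, 0x402 decodes exactly to the state that
schedule_timeout_idle() sleeps in.  Paraphrasing include/linux/sched.h
and kernel/time/timer.c of this era:

	#define TASK_UNINTERRUPTIBLE	0x0002
	#define TASK_NOLOAD		0x0400
	#define TASK_IDLE	(TASK_UNINTERRUPTIBLE | TASK_NOLOAD) /* 0x0402 */

	signed long __sched schedule_timeout_idle(signed long timeout)
	{
		__set_current_state(TASK_IDLE);	/* ->state = 0x402 */
		return schedule_timeout(timeout);
	}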

Now, two of 24 might just be bad luck, but I haven't seen anything like
this out of TREE07 since I queued Tejun's patch, so I am inclined to
view your patch below with considerable suspicion.

I -am- seeing this out of TREE01, even with Tejun's patch, but that
scenario sets maxcpus=8 and nr_cpus=43, which seems to be tickling an
issue that several other people are seeing.  Others' testing seems to
indicate that setting CONFIG_SOFTLOCKUP_DETECTOR=y suppresses this
issue, but I need an overnight run to check my test cases, and that run
is scheduled for tonight.

So there might be something else going on as well.

							Thanx, Paul

>  kernel/sched/core.c | 36 ++++++++++++++++++++++++++++++------
>  1 file changed, 30 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d3d39a283beb..59b667c16826 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -894,6 +894,22 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
>  }
> 
>  #ifdef CONFIG_SMP
> +
> +/*
> + * Per-CPU kthreads are allowed to run on !active && online CPUs, see
> + * __set_cpus_allowed_ptr() and select_fallback_rq().
> + */
> +static inline bool is_per_cpu_kthread(struct task_struct *p)
> +{
> +	if (!(p->flags & PF_KTHREAD))
> +		return false;
> +
> +	if (p->nr_cpus_allowed != 1)
> +		return false;
> +
> +	return true;
> +}
> +
>  /*
>   * This is how migration works:
>   *
> @@ -951,8 +967,13 @@ struct migration_arg {
>  static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
>  				 struct task_struct *p, int dest_cpu)
>  {
> -	if (unlikely(!cpu_active(dest_cpu)))
> -		return rq;
> +	if (is_per_cpu_kthread(p)) {
> +		if (unlikely(!cpu_online(dest_cpu)))
> +			return rq;
> +	} else {
> +		if (unlikely(!cpu_active(dest_cpu)))
> +			return rq;
> +	}
> 
>  	/* Affinity changed (again). */
>  	if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
> @@ -1482,10 +1503,13 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
>  	for (;;) {
>  		/* Any allowed, online CPU? */
>  		for_each_cpu(dest_cpu, &p->cpus_allowed) {
> -			if (!(p->flags & PF_KTHREAD) && !cpu_active(dest_cpu))
> -				continue;
> -			if (!cpu_online(dest_cpu))
> -				continue;
> +			if (is_per_cpu_kthread(p)) {
> +				if (!cpu_online(dest_cpu))
> +					continue;
> +			} else {
> +				if (!cpu_active(dest_cpu))
> +					continue;
> +			}
>  			goto out;
>  		}
> 
> 
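
[Condensed restatement of the two hunks above, with a helper name made
up for illustration; this is the destination test the patch applies in
both places:

	static bool dest_cpu_allowed(struct task_struct *p, int dest_cpu)
	{
		if (is_per_cpu_kthread(p))
			return cpu_online(dest_cpu); /* may use !active window */

		return cpu_active(dest_cpu);
	}
]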
