Date:	Fri, 7 Mar 2008 15:01:26 -0800
From:	Suresh Siddha <suresh.b.siddha@...el.com>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>, ego@...ibm.com,
	mingo@...e.hu, oleg@...sign.ru, yi.y.yang@...el.com,
	linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: Re: [BUG 2.6.25-rc3] scheduler/hotplug: some processes are deadlocked when cpu is set to offline

On Fri, Mar 07, 2008 at 10:36:25PM +0100, Rafael J. Wysocki wrote:
> On Friday, 7 of March 2008, Andrew Morton wrote:
> > -		sched_setscheduler(p, SCHED_FIFO, &param);
> > +//		sched_setscheduler(p, SCHED_FIFO, &param);
> >  		kthread_stop(p);
> >  		takeover_tasklets(hotcpu);
> >  		break;
> > _
> > 
> > fixes the wont-power-off regression.
> > 
> > But 2.6.24 runs the watchdog threads SCHED_FIFO too.  Are you saying that
> > it's the migration code which changed? 
> 
> Well, that would be a problem for suspend/hibernation if there were SCHED_FIFO
> non-freezable tasks bound to the nonboot CPUs.  I'm not aware of any, but ...
> 
> Also, it should be possible to just offline one or more CPUs using the sysfs
> interface at any time and what happens if there are any RT tasks bound to these
> CPUs at that time?  That would be the $subject problem, IMHO.
> 
> Anyone who made the change affecting __migrate_task() so it's unable to migrate
> RT tasks any more should have fixed the CPU hotplug as well.

No, it's not an issue with __migrate_task(). The appended patch fixes my issue.
The recent RT wakeup balancing changes exposed a bug in the migration_call()
code path.

Andrew, please check whether the appended patch fixes your power-off problem as well.

thanks,
suresh
---

Handle the `CPU_DOWN_PREPARE_FROZEN' case in migration_call().

Without this, the CPU going down is never cleared from the root domain's
"online" mask. RT tasks could therefore be woken up on already-dead CPUs,
causing system hangs during standby, shutdown, etc.
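
As background, CPU_DOWN_PREPARE_FROZEN is the same hotplug event as
CPU_DOWN_PREPARE with CPU_TASKS_FROZEN or'ed in, delivered when a CPU goes
down while user space is frozen (suspend/hibernation). Below is a minimal
standalone sketch of that relationship; it is a toy program, not kernel code,
and the numeric values are assumptions mirroring the notifier.h definitions of
that era rather than lines copied from the tree:

/*
 * Toy illustration, not kernel code.  The constant values are assumptions
 * based on the include/linux/notifier.h definitions of the 2.6.25 era.
 */
#include <stdio.h>

#define CPU_DOWN_PREPARE	0x0005	/* CPU about to go down */
#define CPU_TASKS_FROZEN	0x0010	/* set while user tasks are frozen */
#define CPU_DOWN_PREPARE_FROZEN	(CPU_DOWN_PREPARE | CPU_TASKS_FROZEN)

int main(void)
{
	unsigned long action = CPU_DOWN_PREPARE_FROZEN;	/* suspend/hibernate path */

	switch (action) {
	case CPU_DOWN_PREPARE:
		printf("plain case handled\n");
		break;
	default:
		printf("FROZEN event fell through unhandled\n");	/* what happened before the fix */
		break;
	}

	/* A callback must either list the _FROZEN case too (as the patch
	 * below does) or strip the flag up front: */
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DOWN_PREPARE:
		printf("handled after masking off CPU_TASKS_FROZEN\n");
		break;
	}
	return 0;
}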

For example, on my system, this is the failing sequence:

kthread_stop()    // called from the CPU hotplug callbacks
    wake_up_process()
        sched_class->select_task_rq()    // select_task_rq_rt()
            find_lowest_rq()
                find_lowest_cpus()
                    cpus_and(*lowest_mask, task_rq(task)->rd->online, task->cpus_allowed);

In my case, task->cpus_allowed is set to cpu_possible_map and, because of this
bug, rd->online still has the dead CPU set. As a result, find_lowest_rq()
returns an offlined CPU, the RT task gets woken up on a dead CPU, and the
system hangs in various ways.
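
To make that arithmetic concrete, here is a toy model of the cpus_and() step
above (plain C with bitmasks, not the kernel cpumask API; the CPU numbers are
invented): as long as the dead CPU's bit survives in rd->online, the AND with
cpus_allowed keeps it and the lowest-CPU search can return a dead CPU.

/* Toy model of the failing mask arithmetic above; not kernel code. */
#include <stdio.h>

int main(void)
{
	/* Pretend a 4-CPU box; bit i represents CPU i. */
	unsigned long rd_online    = 0xfUL;	/* bug: CPU 3 went down but was never cleared */
	unsigned long cpus_allowed = 0xfUL;	/* task->cpus_allowed == cpu_possible_map */

	/* find_lowest_cpus(): lowest_mask = rd->online & cpus_allowed */
	unsigned long lowest_mask = rd_online & cpus_allowed;

	if (lowest_mask & (1UL << 3))
		printf("dead CPU 3 is still a wakeup candidate -> hang\n");

	/* With the fix, the DOWN_PREPARE(_FROZEN) callback clears bit 3: */
	rd_online &= ~(1UL << 3);
	lowest_mask = rd_online & cpus_allowed;
	printf("after the fix, candidate mask = 0x%lx (CPU 3 excluded)\n", lowest_mask);
	return 0;
}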

This issue doesn't happen with normal tasks because select_task_rq_fair()
chooses between only two CPUs: the CPU waking up the task and the CPU the task
last ran on (task_cpu()). The kernel hotplug code makes sure that both of these
always represent online CPUs. A toy sketch of that contrast follows.
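
The sketch below is not the real select_task_rq_fair(), which also applies
load and affine-wakeup heuristics, and the CPU numbers are invented; it only
illustrates that both fair-class candidates are online by construction, so a
stale rd->online bit never enters the decision.

/* Toy contrast for the fair-class path, not the real select_task_rq_fair(). */
#include <assert.h>
#include <stdio.h>

#define DEAD_CPU 3	/* the CPU being offlined in this made-up scenario */

int main(void)
{
	int waking_cpu    = 0;	/* CPU running the waker: online by definition */
	int task_last_cpu = 2;	/* task_cpu(): hotplug already moved tasks off CPU 3 */

	/* The fair path only ever considers these two candidates, so even a
	 * stale rd->online mask cannot steer the task onto the dead CPU. */
	int candidates[2] = { waking_cpu, task_last_cpu };
	int i;

	for (i = 0; i < 2; i++)
		assert(candidates[i] != DEAD_CPU);

	printf("fair wakeup candidates are all online\n");
	return 0;
}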

Why didn't it show up in 2.6.24? Because the new wakeup code uses more complex
CPU-selection logic in select_task_rq_rt(), which exposed this bug in
migration_call().

Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
---

diff --git a/kernel/sched.c b/kernel/sched.c
index 52b9867..60550d8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5882,6 +5882,7 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		break;
 
 	case CPU_DOWN_PREPARE:
+	case CPU_DOWN_PREPARE_FROZEN:
 		/* Update our root-domain */
 		rq = cpu_rq(cpu);
 		spin_lock_irqsave(&rq->lock, flags);
--