Message-ID: <1294311418.2016.326.camel@laptop>
Date:	Thu, 06 Jan 2011 11:56:58 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Frank Rowand <frank.rowand@...sony.com>,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
	Jens Axboe <axboe@...nel.dk>,
	Yong Zhang <yong.zhang0@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 18/18] sched: Sort hotplug vs ttwu queueing

On Wed, 2011-01-05 at 21:47 +0100, Oleg Nesterov wrote:
> On 01/04, Peter Zijlstra wrote:
> >
> > +#ifdef CONFIG_HOTPLUG_CPU
> > +static void ttwu_queue_unplug(struct rq *rq)
> > +{
> > +	struct task_struct *p, *list = xchg(&rq->wake_list, NULL);
> > +	unsigned long flags;
> > +	int cpu;
> > +
> > +	if (!list)
> > +		return;
> > +
> > +	while (list) {
> > +		p = list;
> > +		list = list->wake_entry;
> > +
> > +		raw_spin_lock_irqsave(&p->pi_lock, flags);
> > +		cpu = select_task_rq(p, SD_BALANCE_WAKE, 0);
> > +		set_task_cpu(p, cpu);
> > +		ttwu_queue(p, cpu);
> > +		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> > +	}
> > +}
> > +#endif
> > +
> >  /**
> >   * try_to_wake_up - wake up a thread
> >   * @p: the thread to be awakened
> > @@ -6151,6 +6174,11 @@ migration_call(struct notifier_block *nf
> >  		migrate_nr_uninterruptible(rq);
> >  		calc_global_load_remove(rq);
> >  		break;
> > +
> > +	case CPU_DEAD:
> > +		ttwu_queue_unplug(cpu_rq(cpu));
> 
> I think this is not strictly needed...
> 
> Afaics, take_cpu_down() can simply call sched_ttwu_pending() at the
> start. This will activate the pending tasks on the (almost) dead
> cpu, but we don't care; they will be migrated later.

In the end, __cpu_disable() is what clears all the cpumask bits,
so I guess we could call it from CPU_DYING.

> When __stop_machine(take_cpu_down) returns nobody can use this CPU
> as a target, iow rq->wake_list can't be used again.

Right, so something like the below should suffice; I've folded it into
patch 17.

---
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -6129,6 +6129,7 @@ migration_call(struct notifier_block *nf
 
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DYING:
+		sched_ttwu_pending();
 		/* Update our root-domain */
 		raw_spin_lock_irqsave(&rq->lock, flags);
 		if (rq->rd) {
