Message-ID: <1406035239.3526.66.camel@tkhai>
Date:	Tue, 22 Jul 2014 17:20:39 +0400
From:	Kirill Tkhai <ktkhai@...allels.com>
To:	Steven Rostedt <rostedt@...dmis.org>
CC:	Peter Zijlstra <peterz@...radead.org>,
	<linux-kernel@...r.kernel.org>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Nicolas Pitre <nicolas.pitre@...aro.org>,
	Ingo Molnar <mingo@...nel.org>, Paul Turner <pjt@...gle.com>,
	<tkhai@...dex.ru>, Oleg Nesterov <oleg@...hat.com>
Subject: Re: [PATCH 2/5] sched: Teach scheduler to understand ONRQ_MIGRATING
 state

On Tue, 22/07/2014 at 08:25 -0400, Steven Rostedt wrote:
> On Tue, 22 Jul 2014 13:45:42 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:
> 
>  
> > > @@ -1491,10 +1491,14 @@ static void ttwu_activate(struct rq *rq, struct task_struct *p, int en_flags)
> > >  static void
> > >  ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
> > >  {
> > > -	check_preempt_curr(rq, p, wake_flags);
> > >  	trace_sched_wakeup(p, true);
> > >  
> > >  	p->state = TASK_RUNNING;
> > > +
> > > +	if (!task_queued(p))
> > > +		return;
> > 
> > How can this happen? we're in the middle of a wakeup, we're just added
> > the task to the rq and are still holding the appropriate rq->lock.
> 
> I believe it can be in the migrating state. A comment would be useful
> here.

Sure, I'll update. Stupid question: should I resend the whole series, or is
one message in this thread enough?
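
Something like the following, perhaps? Just a rough sketch of the updated
hunk; the comment paraphrases your explanation above and would need to match
exactly how the later patch in this series sets ONRQ_MIGRATING:

static void
ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{
	trace_sched_wakeup(p, true);

	p->state = TASK_RUNNING;

	/*
	 * The task may currently be migrating: it has been dequeued
	 * from its old rq and p->on_rq is set to ONRQ_MIGRATING, so
	 * it is not queued here. Nothing more to do; the migration
	 * code will activate it on the destination rq.
	 */
	if (!task_queued(p))
		return;
	...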

> 
> > > @@ -4623,9 +4629,14 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
> > >  	struct rq *rq;
> > >  	unsigned int dest_cpu;
> > >  	int ret = 0;
> > > -
> > > +again:
> > >  	rq = task_rq_lock(p, &flags);
> > >  
> > > +	if (unlikely(p->on_rq) == ONRQ_MIGRATING) {
> > > +		task_rq_unlock(rq, p, &flags);
> > > +		goto again;
> > > +	}
> > > +
> > >  	if (cpumask_equal(&p->cpus_allowed, new_mask))
> > >  		goto out;
> > >  
> > 
> > That looks like a non-deterministic spin loop, 'waiting' for the
> > migration to finish. Not particularly nice and something I think we
> > should avoid for it has bad (TM) worst case behaviour.
> 
> As this patch doesn't introduce the MIGRATING getting set yet, I'd be
> interested in this too. I'm assuming that the MIGRATING flag is only
> set and then cleared within an interrupts disabled section, such that
> the time is no more than a spinlock being taken.
> 
> I would also add a cpu_relax() there too.

Sadly, I didn't completely understand what you mean. Could you please explain
what has to be changed?

(I see the unlikely() is in the wrong place; that's an error. Is the other
issue that there is no task_migrating() helper?)
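
For example, is it enough to fix the unlikely() and add cpu_relax(), as
below? A sketch only; the comment assumes, as Steven says, that
ONRQ_MIGRATING is set and cleared within an interrupts-disabled section:

again:
	rq = task_rq_lock(p, &flags);

	/*
	 * The task is in transit between runqueues. The MIGRATING
	 * window is closed within an interrupts-disabled section, so
	 * the wait here is bounded like taking a spinlock.
	 */
	if (unlikely(p->on_rq == ONRQ_MIGRATING)) {
		task_rq_unlock(rq, p, &flags);
		cpu_relax();
		goto again;
	}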


Thanks,
Kirill

