Date:	Sat, 12 Oct 2013 19:06:56 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ingo Molnar <mingo@...nel.org>,
	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/6] Optimize the cpu hotplug locking -v2

On 10/11, Peter Zijlstra wrote:
>
> On Fri, Oct 11, 2013 at 08:25:07PM +0200, Oleg Nesterov wrote:
> > On 10/11, Peter Zijlstra wrote:
> > >
> > > As a penance I'll start by removing all get_online_cpus() usage from the
> > > scheduler.
> >
> > I only looked at the change in setaffinity,
> >
> > > @@ -3706,7 +3707,6 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
> > >  	struct task_struct *p;
> > >  	int retval;
> > >
> > > -	get_online_cpus();
> > >  	rcu_read_lock();
> >
> > Hmm. In theory task_rq_lock() doesn't imply rcu-lock, so
> > set_cpus_allowed_ptr() can miss the change in cpu_active_mask. But this
> > is probably fine, CPU_DYING does __migrate_task().
>
> I'm fine with always doing sync_sched(); sync_rcu(); if that makes you
> feel better.

No, I was just curious. IOW, I am asking, not arguing.

> But I thought that assuming that !PREEMPT sync_rcu() would
> imply sync_sched() was ok. I think the comment there even says as much.
>
> And task_rq_lock() will very much disable preemption; and thus we get
> what we want, right?

It even disables irqs, so this should always imply rcu_read_lock() with
any implementation, I guess. However, I was told we should not rely on
this; for example, posix_timer_event() does rcu_read_lock() even though
it is always called under spin_lock_irq().

But what I actually tried to say is that this particular change looks
fine even if cpu_down() doesn't do sync_sched/rcu at all, because we can
rely on __migrate_task(). IOW, if we race with DOWN_PREPARE and miss
set_cpu_active(false), we can pretend that this CPU goes down later.

> In any case; the goal was to make either RCU or preempt-disable
> sufficient.
>
> > However. This means that sched_setaffinity() can fail if it races with
> > the failing cpu_down() (say, __cpu_notify(CPU_DOWN_PREPARE) fails).
> > Probably we do not really care, just this looks a bit confusing.
>
> Couldn't be bothered; failing hotplug will have side-effects any which
> way.

OK.

> > > @@ -3827,12 +3825,11 @@ long sched_getaffinity(pid_t pid, struct cpumask *mask)
> > >  		goto out_unlock;
> > >
> > >  	raw_spin_lock_irqsave(&p->pi_lock, flags);
> > > -	cpumask_and(mask, &p->cpus_allowed, cpu_online_mask);
> > > +	cpumask_and(mask, &p->cpus_allowed, cpu_active_mask);
> >
> > But I am just curious: is this change strictly needed?
>
> No; we could do without. It really doesn't matter much if anything. I
> only did it because sched_setaffinity()->set_cpus_allowed_ptr() checks
> against active, not online. And had a sudden urge to make get/set
> symmetric -- totally pointless otherwise.

OK, thanks, I was just curious.

In fact I do not even understand why getaffinity() doesn't simply
return ->cpus_allowed, but this is off-topic.

Oleg.

