Message-ID: <20150629084929.GE19282@twins.programming.kicks-ass.net>
Date:	Mon, 29 Jun 2015 10:49:29 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	paulmck@...ux.vnet.ibm.com, tj@...nel.org, mingo@...hat.com,
	der.herr@...r.at, dave@...olabs.net, riel@...hat.com,
	viro@...IV.linux.org.uk, torvalds@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/6] stop_machine: kill stop_cpus_mutex and
 stop_cpus_lock

On Fri, Jun 26, 2015 at 10:46:12PM +0200, Oleg Nesterov wrote:
> > I would much prefer to keep stop_two_cpus() as proposed, taking the two
> > cpu_stopper::lock instances and replacing stop_cpus_mutex with a
> > percpu-rwsem.
> 
> OK, let's avoid the cpumask in stop_two_cpus(),
> 
> 	int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *arg)
> 	{
> 		struct multi_stop_data msdata;
> 		struct cpu_stop_done done;
> 		struct cpu_stop_work *work1, *work2;
> 
> 		msdata = (struct multi_stop_data){
> 			.fn = fn,
> 			.data = arg,
> 			.num_threads = 2,
> 			.active_cpus = cpumask_of(cpu1),
> 		};
> 
> 		cpu_stop_init_done(&done, 2);
> 		set_state(&msdata, MULTI_STOP_PREPARE);
> 
> 		if (cpu1 > cpu2)
> 			swap(cpu1, cpu2);
> 		work1 = stop_work_alloc_one(cpu1, true);
> 		work2 = stop_work_alloc_one(cpu2, true);
> 
> 		*work1 = *work2 = (struct cpu_stop_work) {
> 			.fn = multi_cpu_stop,
> 			.arg = &msdata,
> 			.done = &done
> 		};
> 
> 		preempt_disable();
> 		cpu_stop_queue_work(cpu1, work1);
> 		cpu_stop_queue_work(cpu2, work2);
> 		preempt_enable();
> 
> 		wait_for_completion(&done.completion);
> 
> 		stop_work_free_one(cpu1);
> 		stop_work_free_one(cpu2);
> 		wake_up(&stop_work_wq);
> 
> 		return done.executed ? done.ret : -ENOENT;
> 	}
> 
> 2 cmpxchg()'s vs 2 spin_lock()'s. Plus wake_up(), but we can check
> waitqueue_active().
> 
> Do you think this will be noticeably slower?

Nah, I suppose not. Either we wait on the 'mutex' for access to the work
or we wait on the completion.
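For reference, the waitqueue_active() guard is the usual "skip the wakeup
when nobody is waiting" pattern; a rough sketch against the stop_work_wq
from the code above (not the actual patch), where the explicit barrier
pairs with the one implied by prepare_to_wait() on the stop_cpus() side:

        stop_work_free_one(cpu1);
        stop_work_free_one(cpu2);

        /*
         * Sketch only: make the frees visible before checking for waiters,
         * so a stop_cpus() that just went to sleep cannot be missed.
         */
        smp_mb();
        if (waitqueue_active(&stop_work_wq))
                wake_up(&stop_work_wq);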

> So I am still not convinced... But probably I am too biased ;)

I'm just a tad worried; I don't want to make the relatively cheap
stop_two_cpus() operation more expensive for the benefit of
stop_cpus().
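
To be concrete, the double-lock variant argued for above would look
roughly like this. Only a sketch: __cpu_stop_queue_work() is a
hypothetical helper that enqueues a work item with the stopper lock
already held, and the ordering rule is what keeps two concurrent
stop_two_cpus() calls from deadlocking against each other:

        struct cpu_stopper *stopper1, *stopper2;
        unsigned long flags;

        /*
         * Sketch: always take the two per-cpu stopper locks in cpu-number
         * order so that two concurrent stop_two_cpus() calls cannot ABBA
         * deadlock.
         */
        if (cpu1 > cpu2)
                swap(cpu1, cpu2);

        stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
        stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);

        spin_lock_irqsave(&stopper1->lock, flags);
        spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);

        __cpu_stop_queue_work(stopper1, work1);	/* hypothetical: enqueue   */
        __cpu_stop_queue_work(stopper2, work2);	/* under the held lock     */

        spin_unlock(&stopper2->lock);
        spin_unlock_irqrestore(&stopper1->lock, flags);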

> Btw. I can't understand the cpu_active() checks in stop_two_cpus().
> Do we really need them?

The comment is misleading and, I think, a leftover from an earlier
attempt to avoid the deadlock, but I suspect we still need the checks
themselves. Either that or I need to wake up more :-)

I cannot see how multi_cpu_stop() handles offline CPUs; afaict it will
spin-wait indefinitely for the other CPU to join its state. So we need
to bail out early if either CPU is unavailable.
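
Concretely, that would be an early check before anything is queued,
along these lines (a sketch; -ENOENT mirrors the "not executed" return
in the code above):

        /*
         * Sketch: if either CPU is going away, its stopper may never run
         * our work, and multi_cpu_stop() on the other CPU would spin
         * forever waiting for it to join.  Bail out instead.
         */
        if (!cpu_active(cpu1) || !cpu_active(cpu2))
                return -ENOENT;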
