Date:	Wed, 2 Dec 2015 18:03:14 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Chris Metcalf <cmetcalf@...hip.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	Christoph Lameter <cl@...ux.com>,
	Ingo Molnar <mingo@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 3/7] perf: Migrate perf to use new tick dependency mask model

On Wed, Dec 02, 2015 at 05:17:58PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 25, 2015 at 01:34:30PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 24, 2015 at 11:19:33AM -0500, Chris Metcalf wrote:
> 
> > > It would be helpful to have a comment explaining why these two
> > > can't race with each other, e.g. this race:
> > > 
> > > [cpu 1]  atomic_dec_and_test
> > > [cpu 2] atomic_inc_return
> > > [cpu 2] tick_nohz_set_dep()
> > > [cpu 1] tick_nohz_clear_dep()
> > > 
> > > Or perhaps this is a true race condition possibility?
> > > 
> > > I think we're OK for the sched cases since they're protected under
> > > the rq lock, I think.  I'm not sure about the POSIX cpu timers.
> > 
> > Hmm, how did I miss that...
> > 
> > So in the case of perf, either we need locking, in which case we may want
> > to use something like tick_nohz_add_dep() which takes care of counting.
> > But perf would be the only user.
> 
> Right; so you could use something like atomic_dec_and_mutex_lock(), that
> would only globally serialize the 0<->1 transitions without incurring
> overhead to any other state transitions.
> 
> A possibly even less expensive option might be something along the lines
> of:
> 
> static void tick_nohz_update_perf_dep(void)
> {
> 	static DEFINE_SPINLOCK(lock);
> 	bool freq;
> 
> 	spin_lock(&lock);
> 	freq = !!atomic_read(&nr_freq_events);
> 	if (freq ^ !!tick_nohz_test_dep(PERF)) {
> 		if (freq)
> 			tick_nohz_set_dep(PERF);
> 		else
> 			tick_nohz_clear_dep(PERF);
> 	}
> 	spin_unlock(&lock);
> }

Well, doing the inc/dec inside the lock is easier to read :-)
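
(For illustration, a minimal sketch of that variant; the helper names are made up, the dep calls
reuse the naming from the snippet above, and nr_freq_events becomes a plain int since the lock
now covers it:)

	static DEFINE_SPINLOCK(nr_freq_lock);
	static int nr_freq_events;

	static void perf_freq_event_inc(void)
	{
		spin_lock(&nr_freq_lock);
		/* the 0 -> 1 transition and the dep update can't be split */
		if (nr_freq_events++ == 0)
			tick_nohz_set_dep(PERF);
		spin_unlock(&nr_freq_lock);
	}

	static void perf_freq_event_dec(void)
	{
		spin_lock(&nr_freq_lock);
		/* likewise for the 1 -> 0 transition */
		if (--nr_freq_events == 0)
			tick_nohz_clear_dep(PERF);
		spin_unlock(&nr_freq_lock);
	}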

> 
> 
> 	if (atomic_inc_return(&nr_freq_events) == 1)
> 		tick_nohz_update_perf_dep();
> 
> 
> 	if (atomic_dec_return(&nr_freq_events) == 0)
> 		tick_nohz_update_perf_dep();
> 
> 
> That avoids the atomic_add_unless() muckery.

Right, I can do either that, or I can move the dependency to the CPU level
and count nr_freq in the cpu_ctx when any ctx gets scheduled in/out. Then
every time we increment and nr_freq becomes 1, we set the dependency (all of
that should be serialized since it only happens locally).

Which way do you prefer? Arguably, the global dependency approach is probably
simpler (although the version above, based on _test() and atomic_read(), is tricky) and
the CPU-level dependency is more fine-grained (and avoids the above tricks). I'm not sure
we need it to be that fine-grained at this level though.
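
(Rough sketch of that CPU-level variant, just to make the idea concrete; cpuctx->nr_freq and the
*_dep_cpu() helpers are assumptions here, not the actual patch:)

	/* runs on the local CPU only, so a plain counter is enough */
	static void perf_cpu_freq_inc(struct perf_cpu_context *cpuctx)
	{
		if (cpuctx->nr_freq++ == 0)
			tick_nohz_set_dep_cpu(smp_processor_id(), PERF);
	}

	static void perf_cpu_freq_dec(struct perf_cpu_context *cpuctx)
	{
		if (--cpuctx->nr_freq == 0)
			tick_nohz_clear_dep_cpu(smp_processor_id(), PERF);
	}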

> 
> > _ sched_clock_stable: that one is more obscure. It seems that set_sched_clock_stable()
> >   and clear_sched_clock_stable() can race on the static key if they run concurrently, and
> >   that would affect the tick dependency mask as well.
> 
> All you need to ensure here is that clear wins. Once we clear
> sched_clock_stable we should _never_ set it again.

Ok, so we had better make sure that a sequence like:

     set_sched_clock_stable();
     clear_sched_clock_stable();

...is always serialized by the callers somehow. If they can't happen in parallel we are
fine, because the set is done sequentially whereas the clear does the static key change
asynchronously.
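
(One way to picture the "clear wins" rule, purely illustrative and not the actual sched_clock
code: a sticky flag that a late set can never undo:)

	static int sched_clock_stable_cleared;

	void clear_sched_clock_stable(void)
	{
		sched_clock_stable_cleared = 1;
		/* the static key change itself happens asynchronously */
	}

	void set_sched_clock_stable(void)
	{
		/* once cleared, never set it again: clear wins */
		if (sched_clock_stable_cleared)
			return;
		/* do the static key change sequentially here */
	}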
