Message-ID: <20151202161758.GS3816@twins.programming.kicks-ass.net>
Date:	Wed, 2 Dec 2015 17:17:58 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Chris Metcalf <cmetcalf@...hip.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	Christoph Lameter <cl@...ux.com>,
	Ingo Molnar <mingo@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 3/7] perf: Migrate perf to use new tick dependency mask model

On Wed, Nov 25, 2015 at 01:34:30PM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 24, 2015 at 11:19:33AM -0500, Chris Metcalf wrote:

> > It would be helpful to have a comment explaining why these two
> > can't race with each other, e.g. this race:
> > 
> > [cpu 1]  atomic_dec_and_test
> > [cpu 2] atomic_inc_return
> > [cpu 2] tick_nohz_set_dep()
> > [cpu 1] tick_nohz_clear_dep()
> > 
> > Or is this perhaps a genuine race condition?
> > 
> > I think we're OK for the sched cases since they're protected under
> > the rq lock, but I'm not sure about the POSIX cpu timers.
> 
> Hmm, how did I miss that...
> 
> So in the case of perf, we need locking, in which case we may want to
> use something like a tick_nohz_add_dep() that takes care of the counting.
> But perf would be the only user.

Right; so you could use something like atomic_dec_and_mutex_lock(), which
would globally serialize only the 0<->1 transitions without incurring
overhead on any other state transitions.
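
Roughly, as a completely untested sketch (the nr_freq_lock mutex and the
wrapper names are made up here; the tick_nohz_*_dep() calls are the ones
from this series):

static DEFINE_MUTEX(nr_freq_lock);

static void nr_freq_event_inc(void)
{
	/* Fast path: the dependency is already set, just bump the count. */
	if (atomic_inc_not_zero(&nr_freq_events))
		return;

	mutex_lock(&nr_freq_lock);
	if (atomic_inc_return(&nr_freq_events) == 1)
		tick_nohz_set_dep(PERF);
	mutex_unlock(&nr_freq_lock);
}

static void nr_freq_event_dec(void)
{
	/* Only the 1 -> 0 transition takes the mutex. */
	if (atomic_dec_and_mutex_lock(&nr_freq_events, &nr_freq_lock)) {
		tick_nohz_clear_dep(PERF);
		mutex_unlock(&nr_freq_lock);
	}
}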

A possibly even less expensive option might be something along the lines
of:

static void tick_nohz_update_perf_dep(void)
{
	static DEFINE_SPINLOCK(lock);
	bool freq;

	spin_lock(&lock);
	/*
	 * Re-read the counter under the lock; if a racing inc and dec
	 * both get here, whichever runs last leaves the dependency bit
	 * consistent with the final counter value.
	 */
	freq = !!atomic_read(&nr_freq_events);
	if (freq ^ !!tick_nohz_test_dep(PERF)) {
		if (freq)
			tick_nohz_set_dep(PERF);
		else
			tick_nohz_clear_dep(PERF);
	}
	spin_unlock(&lock);
}


	/* enable: only the 0 -> 1 transition matters */
	if (atomic_inc_return(&nr_freq_events) == 1)
		tick_nohz_update_perf_dep();

	/* disable: only the 1 -> 0 transition matters */
	if (atomic_dec_return(&nr_freq_events) == 0)
		tick_nohz_update_perf_dep();


That avoids the atomic_add_unless() muckery.

> _ sched_clock_stable: that one is more obscure. It seems that set_sched_clock_stable()
>   and clear_sched_clock_stable() can race on the static keys if they run concurrently,
>   and that would affect the tick dependency mask as well.

All you need to ensure here is that clear wins. Once we clear
sched_clock_stable we should _never_ set it again.
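
E.g. as a rough sketch (the lock, the latch flag and the dependency bit
name here are purely illustrative, not the actual sched_clock code):

static DEFINE_SPINLOCK(clock_dep_lock);
static bool clock_stable_cleared;	/* one-way latch: false -> true */

void clear_sched_clock_stable(void)
{
	spin_lock(&clock_dep_lock);
	clock_stable_cleared = true;		/* latched, never reset */
	tick_nohz_set_dep(CLOCK_UNSTABLE);	/* unstable clock needs the tick */
	spin_unlock(&clock_dep_lock);
}

void set_sched_clock_stable(void)
{
	spin_lock(&clock_dep_lock);
	if (!clock_stable_cleared)		/* a prior clear always wins */
		tick_nohz_clear_dep(CLOCK_UNSTABLE);
	spin_unlock(&clock_dep_lock);
}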