Date:   Wed, 8 Aug 2018 07:42:00 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Joel Fernandes <joelaf@...gle.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "Cc: Android Kernel" <kernel-team@...roid.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Byungchul Park <byungchul.park@....com>,
        Ingo Molnar <mingo@...hat.com>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Tom Zanussi <tom.zanussi@...ux.intel.com>
Subject: Re: [PATCH v12 3/3] tracing: Centralize preemptirq tracepoints and
 unify their usage

On Wed, Aug 08, 2018 at 10:27:00AM -0400, Steven Rostedt wrote:
> On Wed, 8 Aug 2018 06:00:41 -0700
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> > 
> > I suppose that an srcu_read_lock_nmi() and srcu_read_unlock_nmi() could
> > be added, which would do atomic ops on sp->sda->srcu_lock_count.  Not sure
> > whether this would be fast enough to be useful, but easy to provide:
> > 
> > int __srcu_read_lock_nmi(struct srcu_struct *sp)  /* UNTESTED. */
> > {
> > 	int idx;
> > 
> > 	idx = READ_ONCE(sp->srcu_idx) & 0x1;
> > 	atomic_inc(&sp->sda->srcu_lock_count[idx]);
> > 	smp_mb__after_atomic(); /* B */  /* Avoid leaking critical section. */
> > 	return idx;
> > }
> > 
> > void __srcu_read_unlock_nmi(struct srcu_struct *sp, int idx)
> > {
> > 	smp_mb__before_atomic(); /* C */  /* Avoid leaking critical section. */
> > 	atomic_inc(&sp->sda->srcu_unlock_count[idx]);
> > }
> > 
> > With appropriate adjustments to allow Tiny RCU to also work.
> > 
> > Note that you have to use _nmi() everywhere, not just in NMI handlers.
> > In fact, the NMI handlers are the one place you -don't- need to use
> > _nmi(), strangely enough.
> > 
> > Might be worth a try -- smp_mb__{before,after}_atomic() is a no-op on
> > some architectures, for example.
> 
> Note this would kill the performance benefit of SRCU that Joel wants.
> Switching from a this_cpu_inc() to an atomic_inc() would have a huge
> impact.

I don't know how huge it would be, but that concern is exactly why I am
proposing adding _nmi() interfaces rather than just directly changing
the stock __srcu_read_lock() and __srcu_read_unlock() functions.

> There's also local_inc(), which, if you are using per-CPU pointers, is
> supposed to guarantee atomicity for single-CPU operations. That's what
> the ftrace ring buffer uses.

Good point, that becomes atomic_long_inc() or equivalent on most
architectures, but an incl instruction (not locked) on x86.  So updating
my earlier still-untested thought:

int __srcu_read_lock_nmi(struct srcu_struct *sp)  /* UNTESTED. */
{
	int idx;

	idx = READ_ONCE(sp->srcu_idx) & 0x1;
	local_inc(&sp->sda->srcu_lock_count[idx]);
	smp_mb__after_atomic(); /* B */  /* Avoid leaking critical section. */
	return idx;
}

void __srcu_read_unlock_nmi(struct srcu_struct *sp, int idx)
{
	smp_mb__before_atomic(); /* C */  /* Avoid leaking critical section. */
	local_inc(&sp->sda->srcu_unlock_count[idx]);
}

Would that work, or is there a better way to handle this?

								Thanx, Paul
