Message-ID: <814426779.34655.1426704641015.JavaMail.zimbra@efficios.com>
Date: Wed, 18 Mar 2015 18:50:41 +0000 (UTC)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: josh@...htriplett.org
Cc: linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
Nicholas Miell <nmiell@...cast.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...hat.com>,
Alan Cox <gnomes@...rguk.ukuu.org.uk>,
Lai Jiangshan <laijs@...fujitsu.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
David Howells <dhowells@...hat.com>
Subject: Re: [RFC PATCH v14] sys_membarrier(): system/process-wide memory
barrier (x86)
----- Original Message -----
> On Wed, Mar 18, 2015 at 04:52:14PM +0000, Mathieu Desnoyers wrote:
> > ----- Original Message -----
> > > On Wed, Mar 18, 2015 at 12:23:02PM -0400, Mathieu Desnoyers wrote:
> > > > memory barriers in reader: 1701557485 reads, 3129842 writes
> > > > signal-based scheme: 9825306874 reads, 5386 writes
> > > > sys_membarrier: 7992076602 reads, 220 writes
> > > >
> > > > The dynamic sys_membarrier availability check adds some overhead to
> > > > the read-side compared to the signal-based scheme, but besides that,
> > > > with the expedited scheme, we can see that we are close to the
> > > > read-side performance of the signal-based scheme. However, this
> > > > non-expedited sys_membarrier implementation has a much slower grace
> > > > period than the signal and memory barrier schemes.
> > >
> > > Doesn't the query flag allow you to find out in advance rather than
> > > dynamically within the reader? What's the reader performance if you
> > > hardcode availability of membarrier?
> >
> > What I am currently doing is to use sys_membarrier with a query
> > flag within a lib constructor, and cache the result in a global
> > variable. In the reader, I just test the variable, and thus detect
> > whether I can use sys_membarrier, or whether I need to fall back to
> > barriers on both the reader and writer sides.
> >
> > Are you suggesting I try removing the global variable load+test
> > from the reader fast path?
>
> Right. You said that "The dynamic sys_membarrier availability check
> adds some overhead to the read-side compared to the signal-based
> scheme"; I wondered how much.
With 8 reader threads in parallel, no writer (workload found
in userspace RCU tests/benchmark/test_urcu*.c):

* memory barriers in read-side:
  307.4 million reads/s
* sys_membarrier read-side:
  with dynamic check:  1142.0 million reads/s
  hardcoded barrier(): 1453.2 million reads/s
                       (a 27% speedup over the dynamic check)
* QSBR (quiescent-state based) read-side:
  2276.9 million reads/s
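
For reference, the lib-constructor caching described above looks
roughly like the following minimal sketch. The flag name/value and
syscall number are assumptions based on this RFC (not a merged kernel
ABI), and smp_mb_slave() is an illustrative name rather than liburcu's
exact helper:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_membarrier
#define __NR_membarrier		317	/* placeholder: arch/RFC specific */
#endif
#define MEMBARRIER_QUERY	(1 << 16)	/* assumed RFC query flag */

static int has_sys_membarrier;	/* cached availability, set once */

static int membarrier(int flags)
{
	return syscall(__NR_membarrier, flags);
}

/* Runs once when the library is loaded. */
static void __attribute__((constructor)) rcu_init_membarrier(void)
{
	/* Query support; a negative return means no usable syscall. */
	if (membarrier(MEMBARRIER_QUERY) >= 0)
		has_sys_membarrier = 1;
}

/* Read-side barrier: the load+test here is the "dynamic check"
 * overhead measured above. */
static inline void smp_mb_slave(void)
{
	if (has_sys_membarrier) {
		/* The writer issues sys_membarrier() on our behalf;
		 * a compiler barrier suffices on the read side. */
		__asm__ __volatile__ ("" : : : "memory");
	} else {
		__sync_synchronize();	/* fallback: full memory barrier */
	}
}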
It might become worthwhile at some point to consider turning the
memory barriers into no-ops from within lib constructors. Remember
that rcu_read_lock()/rcu_read_unlock() can be inlined into
applications, which may add to the challenge.
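
One hypothetical way to do that (ignoring code patching, and with
illustrative names, not liburcu's actual API) is to route the
read-side barrier through a function pointer that the constructor
rewires, reusing membarrier() and MEMBARRIER_QUERY from the sketch
above:

static void read_mb_real(void)
{
	__sync_synchronize();	/* no sys_membarrier: full barrier */
}

static void read_mb_noop(void)
{
	/* sys_membarrier available: the writer pays for ordering,
	 * so readers need no hardware barrier here. */
}

/* Reader fast path calls through this pointer. */
static void (*rcu_read_mb)(void) = read_mb_real;

static void __attribute__((constructor)) rcu_select_read_mb(void)
{
	if (membarrier(MEMBARRIER_QUERY) >= 0)
		rcu_read_mb = read_mb_noop;
}

Of course the indirect call is not free either, and inlined copies of
rcu_read_lock()/rcu_read_unlock() living in application objects cannot
be rewired by the library constructor, which is the challenge
mentioned above.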
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com