Message-ID: <1463521395.16945.1503889546934.JavaMail.zimbra@efficios.com>
Date: Mon, 28 Aug 2017 03:05:46 +0000 (UTC)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Boqun Feng <boqun.feng@...il.com>,
Andrew Hunter <ahh@...gle.com>,
maged michael <maged.michael@...il.com>,
gromer <gromer@...gle.com>, Avi Kivity <avi@...lladb.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Dave Watson <davejwatson@...com>,
Andy Lutomirski <luto@...nel.org>,
Will Deacon <will.deacon@....com>,
Hans Boehm <hboehm@...gle.com>
Subject: Re: [PATCH v2] membarrier: provide register sync core cmd

----- On Aug 27, 2017, at 3:53 PM, Andy Lutomirski luto@...capital.net wrote:
>> On Aug 27, 2017, at 1:50 PM, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
>> wrote:
>>
>> Add a new MEMBARRIER_CMD_REGISTER_SYNC_CORE command to the membarrier
>> system call. It allows processes to register their intent to have their
>> threads issue core serializing barriers in addition to memory barriers
>> whenever a membarrier command is performed.
>>
>
> Why is this stateful? That is, why not just have a new membarrier command to
> sync every thread's icache?

If we did it on every CPU's icache, it would be as trivial as you say. The
concern here is sending IPIs only to the CPUs running threads that belong
to the same process, so we don't disturb unrelated processes.

If we could just grab each CPU's runqueue lock, it would be fairly simple
to do. But we want to avoid hitting each runqueue with the exclusive atomic
access associated with grabbing the lock (cache-line bouncing).

So, the "private" membarrier command ends up reading the rq->curr->mm pointer
for each runqueue and comparing it to its own current->mm value.
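A simplified sketch of that selection loop (error handling and the OOM
fallback omitted; ipi_mb() is the IPI handler shown further below):

static void membarrier_private_expedited(void)
{
	cpumask_var_t tmpmask;
	int cpu;

	if (num_online_cpus() == 1)
		return;

	smp_mb();	/* system call entry is not guaranteed to be a full mb */

	if (!zalloc_cpumask_var(&tmpmask, GFP_NOWAIT))
		return;	/* real code needs a fallback to per-CPU IPIs here */

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		struct task_struct *p;

		/* Our own CPU is ordered by the barriers around the loop. */
		if (cpu == raw_smp_processor_id())
			continue;
		rcu_read_lock();
		p = task_rcu_dereference(&cpu_rq(cpu)->curr);
		/* Only target CPUs currently running a thread of this process. */
		if (p && p->mm == current->mm)
			__cpumask_set_cpu(cpu, tmpmask);
		rcu_read_unlock();
	}
	preempt_disable();
	smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
	preempt_enable();
	cpus_read_unlock();
	free_cpumask_var(tmpmask);

	smp_mb();	/* system call exit is not guaranteed to be a full mb */
}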
However, this means that whenever we skip a CPU, we're not sending an
IPI to that CPU. So we rely on the scheduler to provide the required
full barriers both before storing to rq->curr (after user-space memory
accesses performed by "prev") and after storing to rq->curr (before
user-space memory accesses performed by "next").
The IPI of the private membarrier command can issue both smp_mb()
and sync_core() (that's what my implementation does).
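The handler itself is tiny; something along these lines (sync_core() being
the x86 primitive, other architectures would use their equivalent):

static void ipi_mb(void *info)
{
	smp_mb();	/* order user-space accesses of the interrupted thread */
	sync_core();	/* architecturally serialize the core */
}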
However, having sys_membarrier issue core serializing barriers adds
extra constraints on entry into the scheduler and on resuming to
user-space. It's not sufficient to order user-space memory accesses
wrt the store to rq->curr; we also want to serialize the core
execution. This is why I'm adding sync_core before the full barrier
on entry, and sync_core after the full barrier on exit. Arguably, some
architectures may not need the extra sync_core on exit (e.g. x86 has
iret, which implies core serialization), but there are cases where it's
not guaranteed (AFAIK sysexit), and it's rarely guaranteed on entry.
So, one option is to add the core serialization unconditionally on
entry to and exit from the scheduler. However, as my numbers below show,
that measurably impacts performance in scheduler-heavy benchmarks
(roughly 12% on the pipe benchmark below). Therefore, I propose to
make processes register their intent to have the scheduler issue core
serializing barriers on their behalf when it schedules them out/in.
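To give an idea of the shape this takes (the flag name below is
illustrative, not necessarily what the patch uses), the scheduler would
only pay for the extra serialization when the process has registered:

static inline void membarrier_sched_core_sync(struct task_struct *next)
{
	/*
	 * Hypothetical per-mm flag set by
	 * membarrier(MEMBARRIER_CMD_REGISTER_SYNC_CORE, 0).
	 */
	if (unlikely(next->mm && READ_ONCE(next->mm->membarrier_sync_core)))
		sync_core();	/* after the full barrier, before "next" runs user code */
}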
>> * Scheduler Overhead Benchmarks
>>
>> Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
>> taskset 01 ./perf bench sched pipe -T
>> Linux v4.13-rc6
>>
>>                            Avg. usecs/op   Std.Dev. usecs/op
>> Before this change:             2.75            0.12
>> Non-registered processes:       2.73            0.08
>> Registered processes:           3.07            0.02
>>
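For reference, user-space usage would look something like this (assuming
uapi headers updated by this series; the private expedited command comes
from the companion patch set):

#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* Register once: membarrier commands issued by this process will
	 * now core-serialize its running threads, not just smp_mb() them. */
	if (membarrier(MEMBARRIER_CMD_REGISTER_SYNC_CORE, 0)) {
		perror("membarrier register sync core");
		return 1;
	}
	/* ... later, e.g. after a JIT rewrites code, before running it: */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0)) {
		perror("membarrier private expedited");
		return 1;
	}
	return 0;
}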
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com