Message-ID: <1577189631.3966.1516129345161.JavaMail.zimbra@efficios.com>
Date: Tue, 16 Jan 2018 19:02:25 +0000 (UTC)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-api <linux-api@...r.kernel.org>,
Andy Lutomirski <luto@...nel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Andrew Hunter <ahh@...gle.com>,
maged michael <maged.michael@...il.com>,
Avi Kivity <avi@...lladb.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Dave Watson <davejwatson@...com>,
"H. Peter Anvin" <hpa@...or.com>,
Andrea Parri <parri.andrea@...il.com>,
"Russell King, ARM Linux" <linux@...linux.org.uk>,
Greg Hackmann <ghackmann@...gle.com>,
Will Deacon <will.deacon@....com>,
David Sehr <sehr@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
x86 <x86@...nel.org>
Subject: Re: [PATCH for 4.16 04/10] membarrier: provide SHARED_EXPEDITED
command (v2)
----- On Jan 16, 2018, at 1:20 PM, Thomas Gleixner tglx@...utronix.de wrote:
> On Mon, 15 Jan 2018, Mathieu Desnoyers wrote:
>> +static int membarrier_shared_expedited(void)
>> +{
>> +	int cpu;
>> +	bool fallback = false;
>> +	cpumask_var_t tmpmask;
>> +
>> +	if (num_online_cpus() == 1)
>> +		return 0;
>> +
>> +	/*
>> +	 * Matches memory barriers around rq->curr modification in
>> +	 * scheduler.
>> +	 */
>> +	smp_mb();	/* system call entry is not a mb. */
>> +
>> +	/*
>> +	 * Expedited membarrier commands guarantee that they won't
>> +	 * block, hence the GFP_NOWAIT allocation flag and fallback
>> +	 * implementation.
>> +	 */
>> +	if (!zalloc_cpumask_var(&tmpmask, GFP_NOWAIT)) {
>> +		/* Fallback for OOM. */
>> +		fallback = true;
>> +	}
>> +
>> +	cpus_read_lock();
>> +	for_each_online_cpu(cpu) {
>> +		struct task_struct *p;
>> +
>> +		/*
>> +		 * Skipping the current CPU is OK even though we can be
>> +		 * migrated at any point. The current CPU, at the point
>> +		 * where we read raw_smp_processor_id(), is ensured to
>> +		 * be in program order with respect to the caller
>> +		 * thread. Therefore, we can skip this CPU from the
>> +		 * iteration.
>> +		 */
>> +		if (cpu == raw_smp_processor_id())
>> +			continue;
>> +		rcu_read_lock();
>> +		p = task_rcu_dereference(&cpu_rq(cpu)->curr);
>> +		if (p && p->mm && (atomic_read(&p->mm->membarrier_state) &
>> +				   MEMBARRIER_STATE_SHARED_EXPEDITED)) {
>
> This does not make sense vs. the documentation:
>
>> + * @MEMBARRIER_CMD_SHARED_EXPEDITED:
>> + * Execute a memory barrier on all running threads
>> + * part of a process which previously registered
>> + * with MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED.
>
> This should say:
>
>> + * Execute a memory barrier on all running threads
>> + * of all processes which previously registered
>> + * with MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED.
Good point, will fix.
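For reference, the fixed command documentation would then read along these
lines (a sketch against the enum documentation in
include/uapi/linux/membarrier.h, exact alignment aside):

 * @MEMBARRIER_CMD_SHARED_EXPEDITED:
 *                          Execute a memory barrier on all running threads
 *                          of all processes which previously registered
 *                          with MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED.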
>
> And I really have to ask whether this should be named _GLOBAL_ instead of
> _SHARED_.
>
> Hmm?
I agree with you that this behavior fits a "global" definition better
than a "shared" one, especially given that it does not target a specific
shared memory mapping. The main issue I have is the pre-existing
MEMBARRIER_CMD_SHARED introduced in Linux 4.3. That one should also have
been called "MEMBARRIER_CMD_GLOBAL" based on the current line of thought.

Do you envision a way to transition forward to a new "MEMBARRIER_CMD_GLOBAL"
for the currently existing MEMBARRIER_CMD_SHARED?

Perhaps with a duplicated enum entry?

enum membarrier_cmd {
	MEMBARRIER_CMD_QUERY = 0,
	MEMBARRIER_CMD_SHARED = (1 << 0),	/* use MEMBARRIER_CMD_GLOBAL instead */
	MEMBARRIER_CMD_GLOBAL = (1 << 0),
	[...]
};
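
From userspace, both names would then expand to the same bit, so existing
sources and binaries keep working while new code can spell it
MEMBARRIER_CMD_GLOBAL. A rough sketch of a caller, assuming the duplicated
entry ends up exposed in include/uapi/linux/membarrier.h (and going through
syscall(2), since there is no glibc wrapper for membarrier):

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int sys_membarrier(int cmd, unsigned int flags)
{
	/* membarrier(2) has no glibc wrapper, invoke it directly. */
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/*
	 * MEMBARRIER_CMD_GLOBAL and the legacy MEMBARRIER_CMD_SHARED are
	 * both (1 << 0) in the sketch above, so this call is ABI-identical
	 * to what existing MEMBARRIER_CMD_SHARED users already issue.
	 */
	if (sys_membarrier(MEMBARRIER_CMD_GLOBAL, 0) < 0) {
		perror("membarrier");
		return 1;
	}
	return 0;
}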
Thanks,
Mathieu
>
> Thanks,
>
> tglx
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com