Date:   Fri, 28 Jul 2017 10:31:23 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Andrew Hunter <ahh@...gle.com>
Cc:     Avi Kivity <avi@...lladb.com>,
        Maged Michael <maged.michael@...il.com>,
        Geoffrey Romer <gromer@...gle.com>,
        lkml <linux-kernel@...r.kernel.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: Updated sys_membarrier() speedup patch, FYI

On Fri, Jul 28, 2017 at 10:15:49AM -0700, Andrew Hunter wrote:
> On Thu, Jul 27, 2017 at 12:43 PM, Paul E. McKenney
> <paulmck@...ux.vnet.ibm.com> wrote:
> > On Thu, Jul 27, 2017 at 10:20:14PM +0300, Avi Kivity wrote:
> >> IPIing only running threads of my process would be perfect. In fact
> >> I might even be able to make use of "membarrier these threads
> >> please" to reduce IPIs, when I change the topology from fully
> >> connected to something more sparse, on larger machines.
> 
> We do this as well--sometimes we only need RSEQ fences against
> specific CPU(s), and thus pass a subset.

Sounds like a good future enhancement, probably requiring a new syscall
to accommodate the cpumask.
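Purely as an interface sketch (hypothetical names, not actual kernel
code), such a syscall could accept a user-supplied mask much as
sched_setaffinity() does, and then restrict the per-CPU loop in the
patch quoted below to that mask:

static void membarrier_private_expedited_cpumask(const struct cpumask *requested)
{
	int cpu;

	/* Visit only the requested CPUs that are currently online... */
	for_each_cpu_and(cpu, requested, cpu_online_mask) {
		struct task_struct *p;

		rcu_read_lock();
		p = task_rcu_dereference(&cpu_rq(cpu)->curr);
		/* ...and IPI only those running a thread of this process. */
		if (p && p->mm == current->mm)
			smp_call_function_single(cpu, ipi_mb, NULL, 1);
		rcu_read_unlock();
	}
}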

> > +static void membarrier_private_expedited_ipi_each(void)
> > +{
> > +       int cpu;
> > +
> > +       for_each_online_cpu(cpu) {
> > +               struct task_struct *p;
> > +
> > +               rcu_read_lock();
> > +               p = task_rcu_dereference(&cpu_rq(cpu)->curr);
> > +               if (p && p->mm == current->mm)
> > +                       smp_call_function_single(cpu, ipi_mb, NULL, 1);
> > +               rcu_read_unlock();
> > +       }
> > +}
> > +
> 
> We have the (simpler, imho):
> 
> const struct cpumask *mask = mm_cpumask(mm);
> /* possibly AND it with a user requested mask */
> smp_call_function_many(mask, ipi_func, ....);
> 
> which I think will be faster on some archs (that support broadcast)
> and have fewer problems with out-of-sync values (though we do have to
> check in our IPI function that we haven't context switched out).
> 
> Am I missing why this won't work?

My impression is that some architectures don't provide the needed
memory ordering in this case, and also that architectures with ASIDs
typically never clear bits in mm_cpumask() at context switch, so this
approach would IPI CPUs that weren't actually running threads in the
process at the current time.
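For what it's worth, the recheck Andrew mentions might look something
like the sketch below (hypothetical, not from any posted patch): the
IPI handler is handed the target mm and verifies on the interrupted
CPU that it is still running a thread of that process before treating
the barrier as meaningful.

/*
 * Hypothetical sketch of an IPI handler that tolerates a stale
 * cpumask: it rechecks current->mm on the target CPU, so a CPU
 * that has since switched to another process does nothing.
 */
static void ipi_mb_check_mm(void *info)
{
	struct mm_struct *mm = info;

	if (current->mm != mm)
		return;	/* Raced with a context switch away. */

	smp_mb();	/* Order memory accesses around the IPI. */
}

For the CPUs where that check fails, correctness would then hinge on
the context switch itself implying full ordering, which is exactly the
architecture-dependent point in question here.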

Mathieu, anything I am missing?

							Thanx, Paul
