Message-ID: <20100107173118.GG6764@linux.vnet.ibm.com>
Date:	Thu, 7 Jan 2010 09:31:18 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Josh Triplett <josh@...htriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Steven Rostedt <rostedt@...dmis.org>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, tglx@...utronix.de,
	Valdis.Kletnieks@...edu, dhowells@...hat.com, laijs@...fujitsu.com,
	dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier

On Thu, Jan 07, 2010 at 06:18:36PM +0100, Peter Zijlstra wrote:
> On Thu, 2010-01-07 at 08:52 -0800, Paul E. McKenney wrote:
> > On Thu, Jan 07, 2010 at 09:44:15AM +0100, Peter Zijlstra wrote:
> > > On Wed, 2010-01-06 at 22:35 -0800, Josh Triplett wrote:
> > > > 
> > > > The number of threads doesn't matter nearly as much as the number of
> > > > threads typically running at a time compared to the number of
> > > > processors.  Of course, we can't measure that as easily, but I don't
> > > > know that your proposed heuristic would approximate it well.
> > > 
> > > Quite agreed, and not disturbing RT tasks is even more important.
> > 
> > OK, so I stand un-Reviewed-by twice in one morning.  ;-)
> > 
> > > A simple:
> > > 
> > >   for_each_cpu(cpu, current->mm->cpu_vm_mask) {
> > >      if (cpu_curr(cpu)->mm == current->mm)
> > >         smp_call_function_single(cpu, func, NULL, 1);
> > >   }
> > > 
> > > seems far preferable over anything else, if you really want you can use
> > > a cpumask to copy cpu_vm_mask in and unset bits and use the mask with
> > > smp_call_function_any(), but that includes having to allocate the
> > > cpumask, which might or might not be too expensive for Mathieu.
> > 
> > This would be vulnerable to the sys_membarrier() CPU seeing an old value
> > of cpu_curr(cpu)->mm, and that other task seeing the old value of the
> > pointer we are trying to RCU-destroy, right?
> 
> Right, so I was thinking that, since you want a mb to be executed when
> calling sys_membarrier(): if you observe a matching ->mm but the cpu has
> since scheduled, we're good because it scheduled (though we'll still send
> the IPI anyway); if we do not observe it because the task gets scheduled
> in after we do the iteration, we're still good because it scheduled.

Something like the following for sys_membarrier(), then?

  smp_mb();	/* order the caller's prior accesses before sampling ->mm */
  for_each_cpu(cpu, mm_cpumask(current->mm)) {
     if (cpu_curr(cpu)->mm == current->mm)
        smp_call_function_single(cpu, func, NULL, 1);
  }
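
Here, "func" is assumed to be nothing fancier than an IPI handler that
executes the needed memory barrier on the remote CPU, something along
these lines (the name is purely illustrative):

  static void membarrier_ipi(void *unused)
  {
     smp_mb();	/* pairs with the smp_mb() in sys_membarrier() */
  }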

Then the code changing ->mm on the other CPU also needs to have a
full smp_mb() somewhere after the change to ->mm, but before starting
user-space execution.  It might well have one already just due to
overhead, but we need to make sure that someone doesn't optimize us
out of existence.
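
In litmus-test form, the pairing being relied on here is roughly the
following (illustrative only, not the actual scheduler code):

  CPU 0 (sys_membarrier)                 CPU 1 (scheduler)
  ----------------------                 -----------------
  smp_mb();                              switch to a task using this ->mm
  if (cpu_curr(1)->mm == current->mm)    smp_mb();  /* the barrier above */
     smp_call_function_single(1, ...);   ... resume user execution ...

Either CPU 0 observes the new ->mm and sends the IPI, so the handler's
smp_mb() runs on CPU 1, or it does not, in which case CPU 1's own
barrier after the ->mm switch provides the ordering that the skipped
IPI would otherwise have provided.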

							Thanx, Paul

> As to needing to keep rcu_read_lock() around the iteration, for sure we
> need that to ensure the remote task_struct reference we take is valid.
> 
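
With the rcu_read_lock() that Peter mentions above added, the iteration
sketched earlier would become, again only as a rough sketch:

  rcu_read_lock();	/* keeps each cpu_curr(cpu) task_struct valid */
  for_each_cpu(cpu, mm_cpumask(current->mm)) {
     if (cpu_curr(cpu)->mm == current->mm)
        smp_call_function_single(cpu, func, NULL, 1);
  }
  rcu_read_unlock();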
