Message-ID: <1265041477.29013.54.camel@gandalf.stny.rr.com>
Date:	Mon, 01 Feb 2010 11:24:37 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
	dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
	niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
	Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task
 switch at runqueue lock/unlock

On Mon, 2010-02-01 at 11:09 -0500, Mathieu Desnoyers wrote:

> We can deal with the rq->cur update by holding the rq lock in each
> iteration of the for_each_cpu(cpu, mm_cpumask(current->mm)) loop. This
> ensures that if rq->cur is updated, we have an associated memory barrier
> issued (e.g. on x86, implied by writing to cr3 while the rq lock is held).
> 
> However, this does not deal with mm_cpumask update, and we cannot use
> the per-cpu rq lock, as it's a process-wide data structure updated with
> clear_bit/set_bit in switch_mm(). So at the very least, we would have to
> add memory barriers in switch_mm() on some architectures to deal with
> this.
> 
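
For concreteness, the barriers referred to here would sit around the
mm_cpumask update in switch_mm(). Below is a minimal, hypothetical sketch
of that idea; the real switch_mm() is architecture-specific, and
load_new_page_tables() is only a stand-in name for the arch page-table
switch (e.g. the cr3 write on x86). Exact barrier placement is the open
question; this only illustrates the shape:

/*
 * Hypothetical, simplified switch_mm() showing where explicit full
 * barriers could be added around the mm_cpumask update.
 */
static void switch_mm_sketch(struct mm_struct *prev, struct mm_struct *next,
                             struct task_struct *tsk)
{
  unsigned int cpu = smp_processor_id();

  if (likely(prev != next)) {
    smp_mb(); /* order prior accesses before publishing cpu in next's mask */
    cpumask_set_cpu(cpu, mm_cpumask(next));

    load_new_page_tables(next); /* stand-in for the arch page-table switch */

    cpumask_clear_cpu(cpu, mm_cpumask(prev));
    smp_mb(); /* order the mask clear against later accesses */
  }
}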


Doesn't set_bit() imply a wmb()? If so, couldn't we do something like the
following:

again:
  tmp_mask = mm_cpumask(current->mm);
  smp_mb();
  rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
  for_each_cpu(cpu, tmp_mask) {
    spin_lock_irq(&cpu_rq(cpu)->lock);
    ret = current->mm == cpu_curr(cpu)->mm;
    spin_unlock_irq(&cpu_rq(cpu)->lock);
    if (ret)
      smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
  }
  rcu_read_unlock();
  smp_mb();
  if (tmp_mask != mm_cpumask(current->mm)) {
    /* do check for signals here */
    goto again;
  }

Would the above work?
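
One detail: if tmp_mask just holds the pointer returned by mm_cpumask(),
the final comparison compares that pointer with itself and can never see a
change. A way to make the snapshot-and-recheck explicit would be to copy
the mask into a private buffer. A minimal sketch, assuming a sleeping
allocation is fine at this point and that membarrier_ipi() is the IPI
handler from above:

/*
 * Sketch with an explicit snapshot of the mask, so the final re-check
 * can detect concurrent updates to mm_cpumask().
 */
cpumask_var_t snapshot;
int cpu, ret;

if (!alloc_cpumask_var(&snapshot, GFP_KERNEL))
  return -ENOMEM;

again:
  cpumask_copy(snapshot, mm_cpumask(current->mm));
  smp_mb();
  rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
  for_each_cpu(cpu, snapshot) {
    spin_lock_irq(&cpu_rq(cpu)->lock);
    ret = current->mm == cpu_curr(cpu)->mm;
    spin_unlock_irq(&cpu_rq(cpu)->lock);
    if (ret)
      smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
  }
  rcu_read_unlock();
  smp_mb();
  if (!cpumask_equal(snapshot, mm_cpumask(current->mm))) {
    /* do check for signals here */
    goto again;
  }

free_cpumask_var(snapshot);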

-- Steve

