Message-ID: <1263460096.4244.282.camel@laptop>
Date: Thu, 14 Jan 2010 10:08:16 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Steven Rostedt <rostedt@...dmis.org>,
Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
akpm@...ux-foundation.org, josh@...htriplett.org,
tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
barrier (v5)
On Wed, 2010-01-13 at 14:36 -0500, Mathieu Desnoyers wrote:
> * Peter Zijlstra (peterz@...radead.org) wrote:
> > On Tue, 2010-01-12 at 20:37 -0500, Mathieu Desnoyers wrote:
> > > + for_each_cpu(cpu, tmpmask) {
> > > + spin_lock_irq(&cpu_rq(cpu)->lock);
> > > + mm = cpu_curr(cpu)->mm;
> > > + spin_unlock_irq(&cpu_rq(cpu)->lock);
> > > + if (current->mm != mm)
> > > + cpumask_clear_cpu(cpu, tmpmask);
> > > + }
> >
> > Why not:
> >
> > rcu_read_lock();
> > if (current->mm != cpu_curr(cpu)->mm)
> > cpumask_clear_cpu(cpu, tmpmask);
> > rcu_read_unlock();
> >
> > the RCU read lock ensures the task_struct obtained remains valid, and it
> > avoids taking the rq->lock.
> >
>
> If we go for a simple rcu_read_lock, I think that we need an smp_mb()
> after switch_to() updates the current task on the remote CPU, before it
> returns to user-space. Do we have this guarantee for all architectures?
>
> So what I'm looking for, overall, is:
>
> schedule()
> ...
> switch_mm()
> smp_mb()
> clear mm_cpumask
> set mm_cpumask
> switch_to()
> update current task
> smp_mb()
>
> If we have that, then the rcu_read_lock should work.
>
> What the rq lock currently gives us is the guarantee that if the current
> thread changes on a remote CPU while we are not holding this lock, then
> a full scheduler execution is performed, which implies a memory barrier
> if we change the current thread (it does, right?).
I'm not quite seeing it; we have four possibilities, switches between
threads with:
a) our mm, another mm
- if we observe the former, we'll send an IPI (redundant)
- if we observe the latter, the switch_mm will have issued an mb
b) another mm, our mm
- if we observe the former, we're good because the cpu didn't run our
thread when we called sys_membarrier()
- if we observe the latter, we'll send an IPI (redundant)
c) our mm, our mm
- no matter which task we observe, we'll match and send an IPI
d) another mm, another mm
- no matter which task we observe, we'll not match and not send an
IPI.
Or am I missing something?
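
For concreteness, a minimal (untested) sketch of what I'm suggesting,
assuming tmpmask has already been set up from mm_cpumask(current->mm)
with the local CPU cleared as in your v5 patch, and with
membarrier_ipi() just standing in for whatever the IPI handler ends up
being called:

	rcu_read_lock();
	for_each_cpu(cpu, tmpmask) {
		/* cpu_curr() is the rq->curr accessor from kernel/sched.c */
		if (cpu_curr(cpu)->mm != current->mm)
			cpumask_clear_cpu(cpu, tmpmask);
	}
	rcu_read_unlock();

	/* IPI the remaining CPUs; wait so the barrier is in effect on return */
	smp_call_function_many(tmpmask, membarrier_ipi, NULL, 1);

Whether reading cpu_curr(cpu)->mm without holding the rq->lock is
sufficient is of course exactly the switch_mm()/switch_to() ordering
question above.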