Message-ID: <20121024184311.GA5025@redhat.com>
Date: Wed, 24 Oct 2012 20:43:11 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Mikulas Patocka <mpatocka@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Ananth N Mavinakayanahalli <ananth@...ibm.com>,
Anton Arapov <anton@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] percpu-rw-semaphores: use rcu_read_lock_sched
On 10/24, Paul E. McKenney wrote:
>
> On Wed, Oct 24, 2012 at 07:18:55PM +0200, Oleg Nesterov wrote:
> > On 10/24, Paul E. McKenney wrote:
> > >
> > > static inline void percpu_up_read(struct percpu_rw_semaphore *p)
> > > {
> > > 	/*
> > > 	 * Decrement our count, but protected by RCU-sched so that
> > > 	 * the writer can force proper serialization.
> > > 	 */
> > > 	rcu_read_lock_sched();
> > > 	this_cpu_dec(*p->counters);
> > > 	rcu_read_unlock_sched();
> > > }
> >
> > Yes, the explicit lock/unlock makes the new assumptions about
> > synchronize_sched && barriers unnecessary. And iiuc this could
> > even be written as
> >
> > 	rcu_read_lock_sched();
> > 	rcu_read_unlock_sched();
> >
> > 	this_cpu_dec(*p->counters);
>
> But this would lose the memory barrier that is inserted by
> synchronize_sched() after the CPU's last RCU-sched read-side critical
> section.
How? Afaics there is no need to synchronize with this_cpu_dec() itself;
its result was already seen before the 2nd synchronize_sched() was called
in percpu_down_write().
IOW, this memory barrier is only needed to synchronize with the memory
changes the reader makes inside the down_read()/up_read() critical section.
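
For context, the writer side we are discussing looks roughly like this
(a sketch reconstructed from this thread, not the literal patch; names
like __percpu_count(), the helper that sums the per-cpu counters, are
approximate):

	static inline void percpu_down_write(struct percpu_rw_semaphore *p)
	{
		mutex_lock(&p->mtx);
		p->locked = true;

		/*
		 * 1st synchronize_sched(): after this, every new reader sees
		 * ->locked, and every reader that did not see it has left its
		 * rcu_read_lock_sched() section, so we see its this_cpu_inc().
		 */
		synchronize_sched();

		/* Wait for the active readers to drop their counters. */
		while (__percpu_count(p->counters))
			msleep(1);

		/*
		 * 2nd synchronize_sched(): each this_cpu_dec() we observed
		 * above ran inside rcu_read_lock_sched(), so waiting here
		 * guarantees that the memory operations of those read-side
		 * critical sections are visible to us before we return.
		 */
		synchronize_sched();
	}

With this layout in mind, the question is only what the barrier implied
after the reader's last read-side critical section is actually needed for.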
To clarify, of course I do not suggest writing it this way. I am just
trying to check my understanding.
Oleg.