Message-ID: <20130210195042.GA6236@redhat.com>
Date: Sun, 10 Feb 2013 20:50:42 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
tglx@...utronix.de, peterz@...radead.org, tj@...nel.org,
rusty@...tcorp.com.au, mingo@...nel.org, akpm@...ux-foundation.org,
namhyung@...nel.org, rostedt@...dmis.org,
wangyun@...ux.vnet.ibm.com, xiaoguangrong@...ux.vnet.ibm.com,
rjw@...k.pl, sbw@....edu, fweisbec@...il.com,
linux@....linux.org.uk, nikunj@...ux.vnet.ibm.com,
linux-pm@...r.kernel.org, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, netdev@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 04/45] percpu_rwlock: Implement the core design of
Per-CPU Reader-Writer Locks
On 02/11, Srivatsa S. Bhat wrote:
>
> On 02/10/2013 11:36 PM, Oleg Nesterov wrote:
> >>> +static void announce_writer_inactive(struct percpu_rwlock *pcpu_rwlock)
> >>> +{
> >>> + unsigned int cpu;
> >>> +
> >>> + drop_writer_signal(pcpu_rwlock, smp_processor_id());
> >>
> >> Why do we drop ourselves twice? More to the point, why is it important to
> >> drop ourselves first?
> >
> > And don't we need mb() _before_ we clear ->writer_signal ?
> >
>
> Oh, right! Or, how about moving announce_writer_inactive() to _after_
> write_unlock()?
Not sure this will help... but, either way it seems we have another
problem...
percpu_rwlock tries to be "generic". This means we should "ignore" its
usage in hotplug, and _write_lock should not race with _write_unlock.
IOW. Suppose that _write_unlock clears ->writer_signal. We need to ensure
that this can't race with another writer which wants to set this flag.
Perhaps it should be a counter as well, and it should be protected by
the same ->global_rwlock, but _write_lock() should drop that lock before
sync_all_readers() and then take it again?
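Something like the sketch below, perhaps? (completely untested, and the
->writer_count field plus the exact helper names are only assumptions for
illustration, not the patch's actual code)

/*
 * Sketch: count the active writers under ->global_rwlock, so that a
 * racing _write_lock()/_write_unlock() pair cannot corrupt the per-cpu
 * ->writer_signal state.  ->writer_count is an assumed new field in
 * struct percpu_rwlock.
 */
void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
{
	write_lock(&pcpu_rwlock->global_rwlock);
	if (pcpu_rwlock->writer_count++ == 0)
		announce_writer_active(pcpu_rwlock);
	write_unlock(&pcpu_rwlock->global_rwlock);

	/* wait for the readers to move to the slowpath, lock dropped */
	sync_all_readers(pcpu_rwlock);

	write_lock(&pcpu_rwlock->global_rwlock);
}

void percpu_write_unlock(struct percpu_rwlock *pcpu_rwlock)
{
	if (--pcpu_rwlock->writer_count == 0)
		announce_writer_inactive(pcpu_rwlock);
	write_unlock(&pcpu_rwlock->global_rwlock);
}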
> >>> +static inline void sync_reader(struct percpu_rwlock *pcpu_rwlock,
> >>> + unsigned int cpu)
> >>> +{
> >>> + smp_rmb(); /* Paired with smp_[w]mb() in percpu_read_[un]lock() */
> >>
> >> As I understand it, the purpose of this memory barrier is to ensure
> >> that the stores in drop_writer_signal() happen before the reads from
> >> ->reader_refcnt in reader_uses_percpu_refcnt(), thus preventing the
> >> race between a new reader attempting to use the fastpath and this writer
> >> acquiring the lock. Unless I am confused, this must be smp_mb() rather
> >> than smp_rmb().
> >
> > And note that before sync_reader() we call announce_writer_active() which
> > already adds mb() before sync_all_readers/sync_reader, so this rmb() looks
> > unneeded.
> >
>
> My intention was to help the writer see the ->reader_refcnt drop to zero
> ASAP; hence I used smp_wmb() at reader and smp_rmb() here at the writer.
Hmm, interesting... Not sure, but can't really comment. However I can
answer your next question:
> Please correct me if my understanding of memory barriers is wrong here..
Who? Me??? No, we have paulmck for that ;)
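Just to spell out the pattern Paul describes, though (only an illustration;
writer_is_active() and the exact field accesses are invented here, see the
quoted patch for the real code): both sides do a STORE followed by a LOAD
of the other side's state, so both need the full smp_mb(), while
smp_wmb()/smp_rmb() only order stores against stores / loads against loads.

	/* reader fastpath, percpu_read_lock() */
	this_cpu_inc(*pcpu_rwlock->reader_refcnt);		/* STORE */
	smp_mb();
	if (unlikely(writer_is_active(pcpu_rwlock)))		/* LOAD  */
		read_lock(&pcpu_rwlock->global_rwlock);		/* slowpath */

	/* writer, announce_writer_active() + sync_reader(cpu) */
	*per_cpu_ptr(pcpu_rwlock->writer_signal, cpu) = true;	/* STORE */
	smp_mb();
	while (reader_uses_percpu_refcnt(pcpu_rwlock, cpu))	/* LOAD  */
		cpu_relax();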
Oleg.