Message-ID: <CA+55aFyVyKKNSSZJ6qZLA+RjMjtH1K9MZK+GRqUwy5CyC36xcQ@mail.gmail.com>
Date: Mon, 21 Sep 2015 13:39:48 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Will Deacon <will.deacon@....com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH 4/5] locking/rwsem: Use acquire/release semantics
On Mon, Sep 21, 2015 at 1:17 PM, Davidlohr Bueso <dave@...olabs.net> wrote:
> @@ -114,7 +114,7 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
> {
> long tmp;
>
> - tmp = atomic_long_add_return(-RWSEM_WAITING_BIAS,
> + tmp = atomic_long_add_return_acquire(-RWSEM_WAITING_BIAS,
> (atomic_long_t *)&sem->count);
> if (tmp < 0)
> rwsem_downgrade_wake(sem);
Careful. I'm pretty sure this is wrong.
When we downgrade exclusive ownership to non-exclusive, that should be
a *release* operation. Anything we did inside the write-locked region
had damn better _stay_ inside the write-locked region, we can not
allow it to escape down into the read-locked side. So it needs to be
at least a release.
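As a sketch, the downgrade path would instead want the release variant
(assuming the atomic_long_add_return_release() helper from this series is
available; untested, just to illustrate the direction):

    static inline void __downgrade_write(struct rw_semaphore *sem)
    {
            long tmp;

            /*
             * Release: every access done while write-locked must be
             * visible before the count update that lets readers in.
             */
            tmp = atomic_long_add_return_release(-RWSEM_WAITING_BIAS,
                                                 (atomic_long_t *)&sem->count);
            if (tmp < 0)
                    rwsem_downgrade_wake(sem);
    }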
In contrast, anything that we do in the read-locked part is fine to be
re-ordered into the write-locked exclusive part, so it does *not* need
acquire ordering (the original write locking obviously did use
acquire, and acts as a barrier for everything that comes in the locked
region).
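To make the ordering concrete, a hedged example of a caller (shared, its
fields and NEW_STATE are made-up names, purely for illustration):

    down_write(&sem);
    shared->state = NEW_STATE;     /* must not escape below the downgrade  */
    downgrade_write(&sem);         /* hence at least RELEASE ordering here */
    val = shared->other_field;     /* read-side access: hoisting this up   */
                                   /* into the write-locked region is fine,*/
                                   /* so no ACQUIRE is needed on downgrade */
    up_read(&sem);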
I tried to look through everything, and I think this is the only thing
you got wrong, but I'd like somebody to double-check. Getting the
acquire/release semantics wrong will cause some really really subtle
and hard-as-hell-to-find bugs. So let's be careful out there, ok?
Linus