Message-ID: <20190418144036.GE12232@hirez.programming.kicks-ass.net>
Date: Thu, 18 Apr 2019 16:40:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count
negative

On Thu, Apr 18, 2019 at 10:08:28AM -0400, Waiman Long wrote:
> On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
> > On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
> >>  inline void __down_read(struct rw_semaphore *sem)
> >>  {
> >> +	long count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
> >> +						   &sem->count);
> >> +
> >> +	if (unlikely(count & RWSEM_READ_FAILED_MASK)) {
> >> +		rwsem_down_read_failed(sem, count);
> >>  		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
> >>  	} else {
> >>  		rwsem_set_reader_owned(sem);
> > *groan*, that is not provably correct. It is entirely possible to get
> > enough fetch_add()s piled on top of one another to overflow regardless.
> >
> > Unlikely, yes, impossible, no.
> >
> > This makes me nervous as heck; I really don't want to ever have to
> > debug something like that :-(
>
> The number of fetch_add()s that can pile up is limited by the number
> of CPUs available in the system.
> Yes, if you have a 32k-processor system that has all the CPUs trying
> to acquire the same read-lock, we will have a problem.

Having more CPUs than that is not impossible these days.
> Or, as Linus said, if tasks could be kept preempted right after doing
> the fetch_add, with newly scheduled tasks doing the fetch_add on the
> same lock again, we could have an overflow with fewer CPUs.

That.
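
To make that scenario concrete: nothing in the fast path bounds how many
increments can land before any of the failed ones get backed out in the
slow path. Illustrative userspace sketch only, with a deliberately tiny,
made-up bit layout (not the patch's real one) so the spill happens after
a handful of increments:

#include <stdatomic.h>
#include <stdio.h>

#define READER_BIAS	(1 << 8)	/* made-up reader bias */
#define FAILED_MASK	(1 << 15)	/* made-up "count went negative" bit */

int main(void)
{
	atomic_int count = 0;
	int i, piled = 0;

	/*
	 * 200 unconditional increments, as if 200 tasks were preempted
	 * right after their fetch_add with nobody backing out yet.
	 */
	for (i = 0; i < 200; i++) {
		int old = atomic_fetch_add(&count, READER_BIAS);

		if (old & FAILED_MASK)
			piled++;	/* saw failure, bias still in */
	}

	/*
	 * After 128 increments the count spills into the failed bit,
	 * even though every caller checked the mask afterwards.
	 */
	printf("count=%#x failed-but-not-backed-out=%d\n",
	       (unsigned int)atomic_load(&count), piled);
	return 0;
}
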
> How about disabling preemption before the fetch_add and re-enabling
> it afterward to address the latter concern?

Performance might be an issue; look at what preempt_disable() +
preempt_enable() generate on ARM64, for example. That's not particularly
pretty.

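For reference, the suggestion would look something like this (a sketch
only, reusing the names from the quoted hunk, not a tested patch). Note
that rwsem_down_read_failed() can sleep, so preemption has to come back
on before calling it, which still leaves the failed-path window open
until the slow path backs the bias out:

static inline void __down_read(struct rw_semaphore *sem)
{
	long count;

	/*
	 * Pin the task to this CPU across the fetch_add, so at most
	 * one unresolved increment per CPU is outstanding here.
	 */
	preempt_disable();
	count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
					      &sem->count);
	if (unlikely(count & RWSEM_READ_FAILED_MASK)) {
		preempt_enable();	/* slow path may sleep */
		rwsem_down_read_failed(sem, count);
		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
	} else {
		rwsem_set_reader_owned(sem);
		preempt_enable();
	}
}
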
> I have no solution for the first case, though.

A cmpxchg() loop can fix this, but that again has the performance
implications you mentioned a while back.
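
Something like this, again just a sketch with the names from the quoted
hunk (and it assumes rwsem_down_read_failed() copes with being called
before our bias went in, which differs from the fetch_add version):

static inline void __down_read(struct rw_semaphore *sem)
{
	long old = atomic_long_read(&sem->count);

	for (;;) {
		long new = old + RWSEM_READER_BIAS;

		/*
		 * Refuse to publish an increment whose result would be
		 * bad; this is the part fetch_add cannot do.
		 */
		if (unlikely(new & RWSEM_READ_FAILED_MASK)) {
			/* we never added the bias, nothing to back out */
			rwsem_down_read_failed(sem, old);
			DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem),
					     sem);
			return;
		}
		if (atomic_long_try_cmpxchg_acquire(&sem->count, &old, new)) {
			rwsem_set_reader_owned(sem);
			return;
		}
		/* lost the race; old has been reloaded, go around */
	}
}

Under heavy reader traffic every retry is another bounce of the cache
line, which is where the performance worry comes from.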