Message-ID: <CAHk-=whA4dDdp+KT_ZWnKr5ERqhUtsf3wRTh7HL1Dcg0vGYV_g@mail.gmail.com>
Date: Wed, 24 Apr 2019 10:56:04 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <longman@...hat.com>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count negative

On Wed, Apr 24, 2019 at 10:02 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> > For an uncontended rwsem, this offers no real benefit. Adding
> > preempt_disable() is more complicated than I originally thought.
>
> I'm not sure I get your objection?

I'm not sure it's an objection, but I do think that it's sad if we
have to do the preempt_enable/disable around the fastpath.

Is the *only* reason for the preempt-disable to avoid the (very
unlikely) case of unbounded preemption in between the "increment
reader counts" and "decrement it again because we noticed it turned
negative"?

If that's the only reason, then I think we should just accept the
race. You still have a "slop" of 15 bits (so 16k processes) hitting
the same mutex, and they'd all have to be preempted in that same
"small handful of instructions" window.

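That window can be sketched in userspace with C11 atomics. This is an
illustrative toy, not the kernel's rwsem code; the names (`sem_count`,
`READER_BIAS`, `down_read_trylock_fast`) are invented for the example:

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy model of the reader fastpath: optimistically add a reader bias,
 * then back it out if the count turned out negative (the sign bit
 * standing in for "writer present / reader count overflowed"). All
 * names here are made up; this is not the actual kernel code. */
#define READER_BIAS 1L

static atomic_long sem_count;

static int down_read_trylock_fast(void)
{
    long cnt = atomic_fetch_add(&sem_count, READER_BIAS) + READER_BIAS;
    if (cnt < 0) {
        /* The race window: a preemption between the add above and the
         * sub below leaves the transiently-wrong count visible to all
         * other tasks for the duration of the preemption. */
        atomic_fetch_sub(&sem_count, READER_BIAS);
        return 0; /* the real code would fall back to the slowpath */
    }
    return 1;
}
```

The point of the email is that for the wrong value to matter, every one
of the ~16k tasks sharing the slop bits would have to be sitting
preempted inside that two-instruction window at once.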
Even if the likelihood of *one* process hitting that race is 90% (and
no, it really isn't), then the likelihood of having 16k processes
hitting that race is 0.9**16384.

We call numbers like that "we'll hit it some time long after the heat
death of the universe" numbers.

Linus