Message-ID: <20190611131359.GH3402@hirez.programming.kicks-ass.net>
Date: Tue, 11 Jun 2019 15:13:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v8 16/19] locking/rwsem: Guard against making count
negative
On Mon, May 20, 2019 at 04:59:15PM -0400, Waiman Long wrote:
> static struct rw_semaphore __sched *
> +rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long adjustment)
> {
> + long count;
> bool wake = false;
> struct rwsem_waiter waiter;
> DEFINE_WAKE_Q(wake_q);
>
> + if (unlikely(!adjustment)) {
> + /*
> + * This shouldn't happen. If it does, there is probably
> + * something wrong in the system.
> + */
> + WARN_ON_ONCE(1);
if (WARN_ON_ONCE(!adjustment)) {
> +
> + /*
> + * An adjustment of 0 means that there are too many readers
> + * holding or trying to acquire the lock. So disable
> + * optimistic spinning and go directly into the wait list.
> + */
> + if (rwsem_test_oflags(sem, RWSEM_RD_NONSPINNABLE))
> + rwsem_set_nonspinnable(sem);
ISTR rwsem_set_nonspinnable() already does that test, so no need to do
it again, right?
> + goto queue;
> + }
> +
> /*
> * Save the current read-owner of rwsem, if available, and the
> * reader nonspinnable bit.