Message-ID: <CA+55aFwZWi6ecDmVsMBQJTrgrW3GD2DaRtpiOspe=5amR1=dNg@mail.gmail.com>
Date: Thu, 9 Apr 2015 11:16:24 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>, Jason Low <jason.low2@...com>,
Peter Zijlstra <peterz@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Aswin Chandramouleeswaran <aswin@...com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] locking/rwsem: Use a return variable in rwsem_spin_on_owner()
On Thu, Apr 9, 2015 at 11:08 AM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> The pointer is a known-safe kernel pointer - it's just that it was
> "known safe" a few instructions ago, and might be rcu-free'd at any
> time.
Actually, we could even do something like this:
static inline int sem_owner_on_cpu(struct rw_semaphore *sem,
				   struct task_struct *owner)
{
	int on_cpu;

	/* Under DEBUG_PAGEALLOC a freed 'owner' could already be unmapped */
#ifdef CONFIG_DEBUG_PAGEALLOC
	rcu_read_lock();
#endif
	on_cpu = sem->owner == owner && owner->on_cpu;
#ifdef CONFIG_DEBUG_PAGEALLOC
	rcu_read_unlock();
#endif
	return on_cpu;
}
because we really don't need to hold the RCU lock over the whole loop,
we just need to validate that the semaphore owner still matches, and
if so, check that it's on_cpu.
And if CONFIG_DEBUG_PAGEALLOC is set, we don't care about performance
*at*all*. We will have worse performance problems than doing some RCU
read-locking inside the loop.
And if CONFIG_DEBUG_PAGEALLOC isn't set, we don't really care about
locking, since at worst we just access stale memory for one iteration.
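Roughly, the spin loop would then look something like the below - just a
sketch to show where the helper slots in, not the exact
rwsem_spin_on_owner() body (the need_resched() handling and return value
are approximate):

static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem,
					 struct task_struct *owner)
{
	/* No rcu_read_lock() held across the whole loop any more */
	while (sem_owner_on_cpu(sem, owner)) {
		if (need_resched())
			return false;
		cpu_relax();
	}

	/*
	 * The owner changed or went to sleep. Keep trying to take the
	 * lock only if nobody owns it right now.
	 */
	return sem->owner == NULL;
}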
Hmm. It's not pretty, but neither is the current "let's just take an
rcu lock that we don't really need over a loop that doesn't have very
strict bounding".
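For comparison, the current shape is more or less this (again, just a
paraphrase of the pattern, not the exact code):

	rcu_read_lock();
	while (sem->owner == owner && owner->on_cpu) {
		if (need_resched())
			break;
		cpu_relax();
	}
	rcu_read_unlock();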
Comments?
Linus