Message-ID: <CA+55aFzXMDjQQ7jTjsPdh1RikXfgV7OCd-+13cz06MOmDBA33w@mail.gmail.com>
Date: Thu, 9 Apr 2015 11:08:11 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>, Jason Low <jason.low2@...com>,
Peter Zijlstra <peterz@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Aswin Chandramouleeswaran <aswin@...com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] locking/rwsem: Use a return variable in rwsem_spin_on_owner()
On Thu, Apr 9, 2015 at 10:56 AM, Paul E. McKenney
<paulmck@...ux.vnet.ibm.com> wrote:
>
> And if such long-term spins are likely, I cannot resist asking if this
> should be instead using SRCU. If you have your own srcu_struct, you
> get to delay your own SRCU grace periods as long as you want. ;-)
No, this is plain RCU, and it is only needed because the 'struct
task_struct' is RCU-allocated, and we do an optimistic access of that
'owner->on_cpu' without actually holding any locks.
And even *that* wouldn't be needed if it wasn't for DEBUG_PAGEALLOC.
We could just access stale memory.
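For context, the pattern under discussion looks roughly like this
(a simplified sketch of rwsem_spin_on_owner(), not the exact code):

	rcu_read_lock();
	while (sem->owner == owner) {
		/*
		 * 'owner' may be freed at any time; the rcu_read_lock()
		 * is what keeps the task_struct memory around (and, with
		 * DEBUG_PAGEALLOC, mapped) for this dereference.
		 */
		if (!owner->on_cpu || need_resched()) {
			rcu_read_unlock();
			return false;
		}
		cpu_relax();
	}
	rcu_read_unlock();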
I wonder if we should get rid of the whole RCU thing (which does add
overhead to a potentially critical piece of code), and replace it with
a new "optimistic_kernel_read()" function that basically just does a
memory read with an exception table entry (ie like __get_user(), but
without any of the user access overhead - no clac etc), so that if we
fault due to DEBUG_PAGEALLOC it just ignores the fault.
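Something in this direction, say (a sketch only - the name and the wrapper
are made up, and probe_kernel_read() is just standing in for the bare
exception-table read, since it still goes through the user-copy machinery
we'd want to avoid):

	#include <linux/uaccess.h>	/* probe_kernel_read() */

	/*
	 * Hypothetical helper: read from a kernel address that was valid a
	 * moment ago but may since have been freed (and, with DEBUG_PAGEALLOC,
	 * unmapped).  Returns true if the read succeeded, false if it faulted.
	 */
	static inline bool optimistic_kernel_read(void *dst, const void *src,
						  size_t size)
	{
		return probe_kernel_read(dst, src, size) == 0;
	}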
Hmm? I think there might be a few other places that currently get RCU
read locks just because they want to do an optimistic read of
something that might be going away from under them.
The pointer is a known-safe kernel pointer - it's just that it was
"known safe" a few instructions ago, and might be rcu-free'd at any
time.
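With a helper like that, the spin loop above could drop the RCU read lock
entirely, e.g. (again just a sketch, using the hypothetical helper):

	while (sem->owner == owner) {
		int on_cpu;

		/* a faulting read just means the owner went away - stop spinning */
		if (!optimistic_kernel_read(&on_cpu, &owner->on_cpu, sizeof(on_cpu)) ||
		    !on_cpu || need_resched())
			return false;

		cpu_relax();
	}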
Linus