Date:   Sat, 19 May 2018 17:49:38 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     rostedt@...dmis.org, byungchul.park@....com,
        mathieu.desnoyers@...icios.com,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: Tasks RCU vs Preempt RCU

On Sat, May 19, 2018 at 03:59:05PM -0700, Joel Fernandes wrote:
> On Fri, May 18, 2018 at 07:29:18PM -0700, Paul E. McKenney wrote:
> > On Fri, May 18, 2018 at 11:36:23AM -0700, Joel Fernandes wrote:
> > > Hi,
> > > 
> > > I was thinking about tasks-RCU and why it's needed. Since preempt-RCU allows
> > > tasks to be preempted in read-side critical sections, can we not just reuse
> > > that mechanism for the trampolines? Since we track all preempted tasks, we
> > > would wait on all tasks preempted within a trampoline.
> > > 
> > > I am trying to understand what will _not_ work if we did that.. I'm guessing
> > > the answer is that the trampoline would have to be wrapped with
> > > rcu_read_{lock,unlock}, which may add some overhead, but please let me know
> > > if I'm missing something else..
> > > 
> > > The advantage, I guess, is the possible elimination of an RCU variant, and
> > > also possibly eliminating the tasks-RCU monitoring thread. Anyway, I was
> > > thinking more in terms of reducing the number of RCU flavors and reducing
> > > complexity.
> > 
> > The problem is that if they are preempted while executing in a trampoline,
> > RCU-preempt neither queues them nor waits on them.
> 
> Not if they are wrapped with rcu_read_lock() and rcu_read_unlock()? From what
> I can see, rcu_preempt_ctxt_queue() builds a list of blocked tasks that would
> keep the grace period from finishing?

But being on the ->blkd_tasks list doesn't necessarily block the current
grace period.  Only those tasks on that list that are also referenced
by ->gp_tasks (or that follow some task referenced by ->gp_tasks)
will block the current grace period.  This is by design -- otherwise,
an endless stream of tasks blocking in their RCU read-side critical
sections could prevent the current grace period from ever ending.
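
To make that concrete, here is a sketch -- not the kernel's actual code,
and task_blocks_current_gp() is a made-up helper, though ->blkd_tasks,
->gp_tasks, and ->rcu_node_entry are the real fields -- of how ->gp_tasks
partitions the ->blkd_tasks list:

	/*
	 * Sketch only: each rcu_node's ->blkd_tasks list holds the tasks
	 * preempted within an RCU read-side critical section.  ->gp_tasks
	 * references the first entry blocking the current grace period;
	 * tasks queued after that grace period started are added ahead
	 * of it and are ignored, so a stream of newly blocked readers
	 * cannot extend the grace period forever.
	 */
	static bool task_blocks_current_gp(struct rcu_node *rnp,
					   struct task_struct *t)
	{
		struct list_head *pos;

		if (!rnp->gp_tasks)
			return false;	/* No reader blocks this GP. */

		/* Walk from ->gp_tasks to the end of ->blkd_tasks. */
		for (pos = rnp->gp_tasks; pos != &rnp->blkd_tasks;
		     pos = pos->next)
			if (pos == &t->rcu_node_entry)
				return true;
		return false;
	}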

> > And the problem with wrapping them with rcu_read_{lock,unlock} is that
> > there would be a point at which the task was already on the trampoline
> > but had not yet executed rcu_read_lock().  Nothing good comes from this.  ;-)
> 
> Yes, I see what you're saying. The data being protected and freed in this
> case is the code itself, so relying on it to do the rcu_read_lock seems
> infeasible. Conceptually at least, I feel this can be fixed by cleverly
> implementing trampolines such that the rcu_read_lock isn't done during the
> trampoline execution. But I am not experienced enough with how the
> trampolines work to say definitively whether that is or isn't possible or
> worth it. But at least I felt it was worthwhile food for thought ;)

I suggested to Steven that the rcu_read_lock() and rcu_read_unlock() might
be placed outside of the trampoline, but this turned out to be infeasible.
Not that I remember why!  ;-)
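
For the archives, a minimal sketch of the window in question, with
hypothetical names (trampoline_body() stands in for whatever work the
trampoline actually does):

	void trampoline(void)		/* hypothetical wrapped trampoline */
	{
		/*
		 * Window #1: a preemption right here leaves the task
		 * executing trampoline text with no rcu_read_lock()
		 * taken, so RCU-preempt neither queues nor waits on
		 * it, and the trampoline could be freed out from
		 * under it.
		 */
		rcu_read_lock();
		trampoline_body();	/* the traced-function work */
		rcu_read_unlock();
		/* Window #2: the same problem here, on the way out. */
	}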

> I actually want to trace out the trampoline executing as it pertains to RCU,
> with your latest rcu/dev.. I think it will be fun :)

Cool!

In addition, if you are interested, it might be worth looking for fields
in rcu_dynticks, rcu_data, rcu_node, and rcu_state that are no longer
actually used.  It might also be worth looking for RCU macros that are
no longer used.

I found a few by accident, so there are probably more...

							Thanx, Paul
