Message-ID: <20140807200625.GA3935@laptop>
Date:	Thu, 7 Aug 2014 22:06:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	mingo@...nel.org, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, dhowells@...hat.com,
	edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
	bobby.prani@...il.com
Subject: Re: [PATCH v3 tip/core/rcu 3/9] rcu: Add synchronous grace-period
 waiting for RCU-tasks

On Thu, Aug 07, 2014 at 03:49:07PM -0400, Steven Rostedt wrote:
> On Thu, 7 Aug 2014 20:46:35 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:
> 
> > On Thu, Aug 07, 2014 at 07:27:53PM +0200, Peter Zijlstra wrote:
> > > Right, Steve (and Paul), please explain _why_ this is an 'RCU' at all.
> > > _Why_ do we have call_rcu_tasks(), and why is it entwined in the
> > > 'normal' RCU stuff? We've got SRCU -- which btw started out simple,
> > > without call_srcu() -- and that lives entirely independently. And SRCU
> > > is far more an actual RCU than this thing is: it's got read-side
> > > primitives and everything.
> > > 
> > > Also, I cannot think of any other use besides trampolines for this
> > > thing, but that might be my limited imagination.
> > 
> > Also, trampolines can end up in the return frames, right? So how can you
> > be sure when to wipe them? Passing through schedule() isn't enough for
> > that.
> 
> Not sure what you mean.

static DEFINE_MUTEX(lock);

void bar(void)
{
	mutex_lock(&lock);
	...
	mutex_unlock(&lock);
}

void foo(void)
{
	bar();
}

Normally that'll give you a stack/return frame like:

 foo()
   bar()
     mutex_lock()
       schedule()

Now suppose there's a trampoline around bar(); that would give:

  foo()
    __trampoline()
      bar()
        mutex_lock()
          schedule()

so the return address of bar() doesn't point back into foo() but into the
trampoline; the task still has to execute trampoline code on its way out.
But we call schedule() from mutex_lock() and think we're all good.
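
(To make the lifetime question concrete -- a minimal sketch of how I
understand the intended use. synchronize_rcu_tasks() is the synchronous
primitive this series adds; the trampoline type and the unregister/free
helpers are made up for illustration:)

struct trampoline;

static void remove_trampoline(struct trampoline *tramp)
{
	/* Stop patching new call sites to point at the trampoline. */
	unregister_trampoline(tramp);
	/*
	 * Wait until every task has passed through a voluntary context
	 * switch; the claim is that nobody can then be executing inside
	 * the trampoline.
	 */
	synchronize_rcu_tasks();
	/* ... but can a return address still point into it? */
	free_trampoline(tramp);
}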

> > Userspace is, but kernel threads typically don't ever end up there.

> Hence, once something calls schedule() directly, we know that it is not
> on a trampoline, nor is it going to return to one.

How can you say it's not going to return to one?
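
(For reference, what I mean above by SRCU having read-side primitives --
a minimal sketch, not taken from any particular caller:)

DEFINE_SRCU(my_srcu);

void reader(void)
{
	int idx = srcu_read_lock(&my_srcu);
	/* ... access SRCU-protected data ... */
	srcu_read_unlock(&my_srcu, idx);
}

void updater(void)
{
	/* Unpublish the old data, then: */
	synchronize_srcu(&my_srcu);	/* wait for pre-existing readers */
	/* Now it is safe to free the old data. */
}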
