Date:	Fri, 8 Aug 2014 16:34:13 +0200
From:	Peter Zijlstra <>
To:	Steven Rostedt <>
Cc:	"Paul E. McKenney" <>,
	Oleg Nesterov <>, ...
Subject: Re: [PATCH v3 tip/core/rcu 3/9] rcu: Add synchronous grace-period
 waiting for RCU-tasks

On Fri, Aug 08, 2014 at 10:12:21AM -0400, Steven Rostedt wrote:
> > Ok, so they're purely used in the function prologue/epilogue callchain.
> No, they are also used by optimized kprobes. This is why optimized
> kprobes depend on !CONFIG_PREEMPT. [ added Masami to the discussion ].

How do those work? Is that one where the INT3 relocates the instruction
stream into an alternative 'text' and that JMPs back into the original
stream at the end?

And what is there to make sure the kprobe itself doesn't do 'funny'?

> Which reminds me. On !CONFIG_PREEMPT, call_rcu_task() should be
> equivalent to call_rcu_sched().

Sure, as long as you make absolutely sure none of that code ends up
calling cond_resched()/might_sleep() etc. Which I think you already said
was true, so no worries there.

> > And you don't want to use synchronize_tasks() because registering a trace
> > functions is atomic ?
> No. Has nothing to do with registering the trace function. The issue is
> that we have no idea when a task happens to be on a trampoline after it
> is registered. For example:
> ops adds a callback to sys_read:
> sys_read() {
>  call trampoline ->
>     set up regs for function call.
>     <interrupt>
>       preempt_schedule();
>       [ new task runs for long time ]
> While this new task is running, we remove the trampoline and want to
> free it. Say this new task keeps the other task from running for
> minutes! We call synchronize_sched() or any other rcu call, and all
> grace periods finish and we free the trampoline. The sys_read() no
> longer calls our trampoline. Doesn't matter, because that task is still
> on it. Now we schedule that task back. It's on a trampoline that has
> just been freed! BOOM. It's executing code that no longer exists.

Sure, I get that part. What I was getting at is _WHY_ you need
call_rcu_task(), why isn't synchronize_tasks() good enough?

> > No need for extra allocations and fancy means of getting rid of them,
> > and only a few bytes extra wrt the existing function.
> This doesn't address the issue we want to solve.
> Say we have 1000 functions we want to trace with 1000 different
> callbacks. Each of these functions has one callback. How do you solve
> that with your solution? Today, we do the list for every function. That
> is, for each of these 1000 functions, we run through 1000 ops looking
> for the ops that registered for this function. Not very efficient is it?

Ah, but you didn't say that, did you :-)

> What we want to do today, is to create a dynamic trampoline for each of
> these 1000 functions. Each function will call a separate trampoline
> that will only call the function that was registered to it. That way,
> we can have 1000 different ops registered to 1000 different functions
> and still have the same performance.

And how will you limit the amount of memory tied up in this? This looks
like a good way to tie up an immense amount of memory fast.

