Date:	Tue, 1 Apr 2008 18:25:48 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH for 2.6.25] Markers - use synchronize_sched()

* Andrew Morton (akpm@...ux-foundation.org) wrote:
> On Mon, 31 Mar 2008 09:16:09 -0400
> Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca> wrote:
> 
> > Use synchronize_sched before calling call_rcu in CONFIG_PREEMPT_RCU until we
> > have call_rcu_sched and rcu_barrier_sched in mainline. It will slow down the
> > marker operations in CONFIG_PREEMPT_RCU, but it fixes the current race against
> > the preempt_disable/enable() protected code paths.
> 
> A better changelog would have described the bug which is being fixed.
> 

Hi Andrew,

Right, this could be appended to the changelog then:

Markers do not mix well with CONFIG_PREEMPT_RCU because the marker code
uses preempt_disable()/preempt_enable(), not rcu_read_lock()/unlock(),
to keep intrusiveness minimal. We would need call_rcu_sched() and
rcu_barrier_sched() primitives.
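
For reference, the read side being protected here is just a
preempt-disabled region around the probe call, along these lines (a
simplified sketch with illustrative field names and a simplified probe
signature; the real code in kernel/marker.c is more involved):

	static void example_probe_cb(struct marker_entry *entry,
				     void *call_private)
	{
		void (*func)(void *probe_private, void *call_private);

		preempt_disable();	/* delimits the read-side section */
		func = rcu_dereference(entry->func);
		if (func)
			func(entry->probe_private, call_private);
		preempt_enable();	/* with CONFIG_PREEMPT_RCU this is
					 * NOT an RCU read-side section */
	}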

Currently, modifying (connecting and disconnecting) the probes attached
to markers requires changing the data structure in RCU style: a new data
structure is created, the pointer is switched atomically, a quiescent
state is reached, and then the old data structure is freed.
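
In pattern form, the update side looks like this (hypothetical names,
sketching the steps just listed):

	struct probe_closure *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);   /* 1. build new version */
	if (!new)
		return -ENOMEM;
	/* ... fill in the updated probe list ... */
	old = entry->closure;
	rcu_assign_pointer(entry->closure, new);   /* 2. atomic switch */
	/* 3. wait for a quiescent state (see below) */
	/* 4. free the old version */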

The quiescent state is reached once all currently running
preempt_disable() regions have finished. We use the call_rcu() mechanism
to execute kfree() after such a quiescent state has been reached.
However, the new CONFIG_PREEMPT_RCU version of call_rcu() and
rcu_barrier() does not guarantee that all preempt_disable() code regions
have finished, hence the race.
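
The deferred kfree() goes through a call_rcu() callback, roughly as
follows (a sketch modeled on free_old_closure() in kernel/marker.c;
field names are approximate):

	static void free_old_closure(struct rcu_head *head)
	{
		struct marker_entry *entry =
			container_of(head, struct marker_entry, rcu);

		kfree(entry->oldptr);	/* free the previous probe closure */
	}

	/* update side, after the atomic pointer switch: */
	entry->oldptr = old;
	call_rcu(&entry->rcu, free_old_closure);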

The "proper" way to do this is to use rcu_read_lock/unlock, but we don't
want to use it to minimize intrusiveness on the traced system. (we do
not want the marker code to call into much of the OS code, because it
would quickly restrict what can and cannot be instrumented, such as the
scheduler).

The temporary fix, until we get call_rcu_sched() and rcu_barrier_sched()
in mainline, is to call synchronize_sched() before each call_rcu() call,
so that we wait for the quiescent state in the system call code path. It
will slow down batch marker enable/disable, but it makes sure the race
is gone.
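
At each deferred-free site, the change boils down to this (sketch in
diff form; the exact context in kernel/marker.c may differ):

	 	entry->oldptr = old;
	+	/* Wait for all preempt_disable() sections to complete
	+	 * before queueing the callback.  Remove this once
	+	 * call_rcu_sched() and rcu_barrier_sched() are merged. */
	+	synchronize_sched();
	 	call_rcu(&entry->rcu, free_old_closure);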

Thanks,

Mathieu

> > Paul, is this ok ? It would be good to get this in for 2.6.25 final.
> 
> Paul seems to have nodded off.  I'll merge it.

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
