Message-ID: <20070112171512.GB2888@Krystal>
Date:	Fri, 12 Jan 2007 12:15:12 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	linux-kernel@...r.kernel.org, Linus Torvalds <torvalds@...l.org>,
	Andrew Morton <akpm@...l.org>, Ingo Molnar <mingo@...hat.com>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Christoph Hellwig <hch@...radead.org>, ltt-dev@...fik.org,
	systemtap@...rces.redhat.com,
	Douglas Niehaus <niehaus@...s.ku.edu>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 05/05] Linux Kernel Markers, non optimised architectures

* Nick Piggin (nickpiggin@...oo.com.au) wrote:
> Mathieu Desnoyers wrote:
> >* Nick Piggin (nickpiggin@...oo.com.au) wrote:
> >
> >>Mathieu Desnoyers wrote:
> >>
> >>
> >>>+#define MARK(name, format, args...) \
> >>>+	do { \
> >>>+		static marker_probe_func *__mark_call_##name = \
> >>>+					__mark_empty_function; \
> >>>+		volatile static char __marker_enable_##name = 0; \
> >>>+		static const struct __mark_marker_c __mark_c_##name \
> >>>+			__attribute__((section(".markers.c"))) = \
> >>>+			{ #name, &__mark_call_##name, format } ; \
> >>>+		static const struct __mark_marker __mark_##name \
> >>>+			__attribute__((section(".markers"))) = \
> >>>+			{ &__mark_c_##name, &__marker_enable_##name } ; \
> >>>+		asm volatile ( "" : : "i" (&__mark_##name)); \
> >>>+		__mark_check_format(format, ## args); \
> >>>+		if (unlikely(__marker_enable_##name)) { \
> >>>+			preempt_disable(); \
> >>>+			(*__mark_call_##name)(format, ## args); \
> >>>+			preempt_enable_no_resched(); \
> >>
> >>Why not just preempt_enable() here?
> >>
> >
> >
> >Because the preempt_enable() macro contains preempt_check_resched(), which
> >may call preempt_schedule(), which leads us to a call to schedule().
> >Therefore, all those very interesting scheduler functions would cause an
> >infinite recursive scheduler call if we marked schedule() and used
> >preempt_enable() in the marker.
> 
> The vast majority of schedule() has preempt turned off, so that shouldn't
> be a problem, if you provide a comment.
> 
> >The primary goal for the markers (and the probes that attach to them) is to
> >have the fewest side-effects possible: any kernel method called from an
> >instrumentation site adds this precise kernel method to the "cannot be
> >instrumented" list, which I want to keep as small as possible.
> 
> OK, well one problem is that it can cause a resched event to be lost, so
> you might say it has more side-effects without checking resched.
> 

I agree: this is a side-effect I pointed out in my LTTng presentation last
summer at OLS.

Here is a quick overview of the potentially problematic instrumentation points
(i386 example):

- with the task_rq_lock held (therefore preemption is disabled, so it's not a
  problem):
sched.c wait_task_inactive()
sched.c try_to_wake_up()
sched.c wake_up_new_task()
sched.c sched_migrate_task()

sched.c schedule() after prepare_task_switch call, before context_switch call.
  Surrounded by preempt_disable(), preempt_enable_no_resched(), should be ok.

- IRQs: the irq_enter()/irq_exit() calls in do_IRQ make sure that the
  preempt_count is incremented. irq_enter() is called with interrupts still
  disabled.
kernel/irq/handle.c handle_IRQ_event()

- NMIs: nmi_enter() -> irq_enter() -> add_preempt_count(HARDIRQ_OFFSET) is
  called with interrupts still disabled.
  Therefore, preemption is disabled within trace points in do_nmi.

- traps: GPF, do_trap, do_page_fault, do_debug, spurious_interrupt,
  math_emulate.
  It is not uncommon for these trap handlers to reenable interrupts very early.
  They do not increment the preemption count.
  Therefore, preemption must be expected when these handlers run: we cannot
  rely on the fact that hard IRQs are disabled to prevent the scheduler from
  running, as the markers themselves become a new source of scheduler events
  (see the sketch after this list).

- local_irq_enable()/local_irq_disable():
  They can call trace_hardirqs_on()/trace_hardirqs_off(). These macros are
  sprinkled in _every_ possible context cited above, from trap handlers to
  preemptible code.

Other contexts or code locations are not a problem (process context, softirq).
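
To make the trap case concrete, here is a rough, hypothetical sketch (not the
real arch/i386/mm/fault.c code, just the shape of it, and the marker name is
made up): interrupts come back on early and preempt_count is never touched, so
what the marker does around the probe call is the only thing that decides
whether the instrumentation site becomes a new rescheduling point:

/* Hypothetical sketch only -- not the actual i386 fault handler. */
fastcall void do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
	unsigned long address = read_cr2();

	/* Trap handlers commonly re-enable interrupts very early... */
	if (regs->eflags & X86_EFLAGS_IF)
		local_irq_enable();

	/*
	 * ...and they do not touch preempt_count, so this code runs fully
	 * preemptible.  The marker below cannot count on hard IRQs being
	 * off to keep the scheduler away; only its own preempt_disable()/
	 * preempt_enable_no_resched() pair does that.
	 */
	MARK(mm_page_fault, "%lu %p", error_code, (void *)address);

	/* ... actual fault handling ... */
}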

If we are sure that calls to preempt_schedule() are acceptable from each of
these contexts, then it is OK to use preempt_enable(). It is important to note
that a marker would then act as a source of scheduler events in code paths
where disabling interrupts is expected to keep the scheduler from running.
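
For reference, this is roughly what the two variants expand to with
CONFIG_PREEMPT (simplified, from memory of include/linux/preempt.h; debug
variants omitted). The only difference is the trailing preempt_check_resched(),
which is the call that can recurse into schedule() if the marker sits in the
scheduler itself, and which turns the marker into a rescheduling point in the
trap-handler contexts listed above:

#define preempt_disable() \
do { \
	inc_preempt_count(); \
	barrier(); \
} while (0)

/* Drops the count but never checks TIF_NEED_RESCHED. */
#define preempt_enable_no_resched() \
do { \
	barrier(); \
	dec_preempt_count(); \
} while (0)

#define preempt_check_resched() \
do { \
	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
		preempt_schedule(); \
} while (0)

/* Same as the no_resched variant, plus the check that can reenter schedule(). */
#define preempt_enable() \
do { \
	preempt_enable_no_resched(); \
	barrier(); \
	preempt_check_resched(); \
} while (0)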

Mathieu

-- 
OpenPGP public key:              http://krystal.dyndns.org:8080/key/compudj.gpg
Key fingerprint:     8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68 
