Date:	Tue, 19 Apr 2016 22:59:14 +0000 (UTC)
From:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:	rostedt <rostedt@...dmis.org>
Cc:	"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Jiri Olsa <jolsa@...nel.org>,
	Masami Hiramatsu <mhiramat@...nel.org>,
	Namhyung Kim <namhyung@...nel.org>,
	linux-trace-users@...r.kernel.org
Subject: Re: [RFC][PATCH 2/4] tracing: Use pid bitmap instead of a pid array
 for set_event_pid

----- On Apr 19, 2016, at 6:49 PM, rostedt rostedt@...dmis.org wrote:

> On Tue, 19 Apr 2016 21:22:21 +0000 (UTC)
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> 
>> It makes sense. Anyway, looking back at my own implementation, I have
>> an array of 64 hlist_head entries (64 * 8 = 512 bytes), typically
>> populated by NULL pointers. It's only a factor of 8 smaller than the
>> bitmap, so it's not a huge gain.
> 
> Actually we talked about a second-level bitmap for quicker searches. I
> can't remember what it was called, but I'm sure HPA can ;-)
> 
> Basically it was a much smaller bitmap, where each bit represents a
> number of bits in the main bitmap. When a bit is set in the main
> bitmap, its corresponding bit is set in the smaller one. This means
> that if you don't have many PIDs, the smaller bitmap won't have many
> bits set either, and you keep all the checks very cache local, because
> you are checking the smaller bitmap most of the time. But this too
> makes things more complex, especially when clearing a bit (although
> that only happens on exit, where speed isn't a big deal). But we
> decided it still wasn't worth it.

Seems like an interesting possible improvement if ever needed.
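Just to make sure I picture the scheme correctly, a quick sketch (all
names, sizes and helpers below are invented for illustration; this is
not the kernel implementation):

/*
 * Illustrative two-level PID bitmap. All names and sizes are made up
 * for the example; this is not the kernel implementation.
 */
#include <limits.h>
#include <stdbool.h>

#define PID_MAX        32768
#define CHUNK_BITS     512     /* PIDs covered by one summary bit */
#define BITS_PER_WORD  (sizeof(unsigned long) * CHAR_BIT)
#define MAIN_WORDS     (PID_MAX / BITS_PER_WORD)             /* 4 KB of bits */
#define SUMM_WORDS     (PID_MAX / CHUNK_BITS / BITS_PER_WORD)

struct pid_filter {
	unsigned long summary[SUMM_WORDS];  /* one bit per CHUNK_BITS chunk */
	unsigned long main[MAIN_WORDS];     /* one bit per PID */
};

static void set_bit_in(unsigned long *map, unsigned int bit)
{
	map[bit / BITS_PER_WORD] |= 1UL << (bit % BITS_PER_WORD);
}

static bool test_bit_in(const unsigned long *map, unsigned int bit)
{
	return map[bit / BITS_PER_WORD] & (1UL << (bit % BITS_PER_WORD));
}

static void pid_filter_add(struct pid_filter *f, unsigned int pid)
{
	set_bit_in(f->main, pid);
	set_bit_in(f->summary, pid / CHUNK_BITS);  /* mark chunk as in use */
}

/*
 * Fast path: with few PIDs set, the summary (a single 64-bit word
 * here) rejects most lookups without touching the 4 KB main bitmap.
 * Clearing is the awkward case: before a summary bit can be cleared,
 * the whole chunk in the main bitmap must be rescanned -- but that
 * only happens on exit, where speed isn't a big deal.
 */
static bool pid_filter_test(const struct pid_filter *f, unsigned int pid)
{
	if (!test_bit_in(f->summary, pid / CHUNK_BITS))
		return false;
	return test_bit_in(f->main, pid);
}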

> 
>> 
>> One alternative approach would be to keep a few entries (e.g. 16 PIDs)
>> in a fast-path lookup array that fits in a single cache line. When the
>> number of PIDs to track goes beyond that, fall back to the bitmap instead.
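To make that fast path concrete, something along these lines (purely
illustrative, invented names; assumes a 64-bit unsigned long):

#include <stdbool.h>
#include <stdint.h>

#define FAST_SLOTS 16   /* 16 * 4 bytes = 64 bytes: one cache line */

struct pid_tracker {
	int32_t fast[FAST_SLOTS];  /* valid while nr <= FAST_SLOTS */
	unsigned int nr;           /* number of tracked PIDs */
	unsigned long *bitmap;     /* allocated once nr > FAST_SLOTS */
};

static bool pid_tracked(const struct pid_tracker *t, int32_t pid)
{
	if (t->nr <= FAST_SLOTS) {  /* common case: scan one cache line */
		for (unsigned int i = 0; i < t->nr; i++)
			if (t->fast[i] == pid)
				return true;
		return false;
	}
	/* slow path: fall back to the full bitmap */
	return t->bitmap[pid / 64] & (1UL << (pid % 64));
}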
>> 
>> > 
>> > Note that the check of the bitmap to trace a task or not is not done
>> > at every tracepoint. It's only done at sched_switch, and then an
>> > internal flag is set. That flag will determine if the event should be
>> > traced, and that is a single bit checked all the time (very good for
>> > cache).
>> 
>> Could this be used by multiple tracers, in a multi-session scheme?
>> In lttng, one user may want to track a set of PIDs, whereas another user may
>> be concurrently interested in another set.
> 
> I should specify, the bit isn't in the task struct, because different
> trace instances may have different criteria to what task may be traced.
> That is, you can have multiple buffers tracing multiple tasks. The
> tracepoint has a private data structure attached to it that is added
> when a tracepoint is registered. This data is a descriptor that
> represents the trace instance. This instance descriptor has a flag to
> ignore or trace the task.
> 
> 
>> 
>> Currently, in the lttng kernel tracer, we do the hash table query for
>> each tracepoint hit, which is clearly not as efficient as checking a
>> task struct flag. One option I see would be to set the task struct flag
>> whenever there is at least one tracer/tracing session that is interested
>> in this event (this would end up being a reference count on the flag). Then,
>> for every flag check that passes, lttng could do HT/bitmap lookups to see if
>> the event needs to go to each specific session.
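Roughly what I mean, sketched in C (hypothetical names; this is not an
existing kernel API):

#include <stdatomic.h>
#include <stdbool.h>

struct task_trace_state {
	atomic_int trace_refs;  /* how many sessions track this task */
};

/* A session that starts tracking the task takes a reference... */
static void task_trace_get(struct task_trace_state *ts)
{
	atomic_fetch_add(&ts->trace_refs, 1);
}

/* ...and drops it when it stops: the "flag" is simply refs > 0. */
static void task_trace_put(struct task_trace_state *ts)
{
	atomic_fetch_sub(&ts->trace_refs, 1);
}

/*
 * Per-tracepoint fast path: the per-session HT/bitmap lookups are
 * only done when at least one session is interested in the task.
 */
static bool task_maybe_traced(struct task_trace_state *ts)
{
	return atomic_load_explicit(&ts->trace_refs,
				    memory_order_relaxed) > 0;
}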
>> 
>> Is this task struct "trace" flag currently exposed to tracers through a
>> reference-counting enable/disable API? If not, do you think it would make
>> sense?
>> 
> 
> Nope. As I said, it's in my own descriptor that is passed through the
> tracepoint private data.
> 
> If you look at my code, you'll notice that I pass around a
> "trace_array" struct (tr), which represents all the information about a
> single trace instance: what tracer is running, what events are enabled,
> and even the file descriptors holding the trace event information. This
> "tr" has a per_cpu "buffer" section that contains per-cpu data (like
> the ring buffer). It also has a "data" section for miscellaneous
> fields. One of them is now "ignore", which is set when filtering is on
> and the sched_switch event noticed that the new task shouldn't be
> traced for this instance.
> 
> If there are multiple instances, there will be multiple callbacks at
> each sched_switch, one for each instance.
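So, in very simplified form, my understanding of the scheme is
something like this (sketch only; the real trace_array and per-CPU
structures are of course more involved, and it assumes a 64-bit
unsigned long):

#include <stdbool.h>

struct instance_cpu_data {
	bool ignore;   /* set when the current task is filtered out */
};

struct trace_instance {
	unsigned long *pid_bitmap;      /* PIDs to trace, or NULL for all */
	struct instance_cpu_data *cpu;  /* per-CPU data, indexed by CPU id */
};

/*
 * sched_switch callback, invoked once per registered instance. The
 * bitmap is consulted only here, not at every event.
 */
static void on_sched_switch(struct trace_instance *tr, int cpu, int next_pid)
{
	if (!tr->pid_bitmap) {
		tr->cpu[cpu].ignore = false;
		return;
	}
	tr->cpu[cpu].ignore =
		!(tr->pid_bitmap[next_pid / 64] & (1UL << (next_pid % 64)));
}

/* Event fast path: a single per-CPU flag test, very cache friendly. */
static bool event_allowed(const struct trace_instance *tr, int cpu)
{
	return !tr->cpu[cpu].ignore;
}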

Got it. I'd have to extend the buffer structures within each session to
add this extra flag, update it for each session from the fork/exit
events so it matches the per-session bitmap, and refresh it whenever the
bitmap is modified.
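On the LTTng side it could look something like this (hypothetical
names; not the current lttng-modules code):

#include <stdbool.h>

struct lttng_session_sketch {
	unsigned long *pid_bitmap;  /* PIDs tracked by this session, or NULL */
	bool track_current;         /* per-session flag; would live in the
	                             * per-CPU buffer state in practice */
};

static bool pid_in_bitmap(const unsigned long *bm, int pid)
{
	return bm[pid / 64] & (1UL << (pid % 64));
}

/*
 * Called from the fork/exit (and sched_switch) handlers, and re-run
 * for the current task whenever the bitmap itself is modified.
 */
static void session_update_flag(struct lttng_session_sketch *s, int pid)
{
	s->track_current = !s->pid_bitmap || pid_in_bitmap(s->pid_bitmap, pid);
}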

Thanks for the explanation! :)

Mathieu

> 
> -- Steve

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
