Message-ID: <20100203212648.GC5068@nowhere>
Date:	Wed, 3 Feb 2010 22:26:51 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Paul Mackerras <paulus@...ba.org>,
	Hitoshi Mitake <mitake@....info.waseda.ac.jp>,
	Li Zefan <lizf@...fujitsu.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Masami Hiramatsu <mhiramat@...hat.com>,
	Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [RFC GIT PULL] perf/trace/lock optimization/scalability
	improvements

On Wed, Feb 03, 2010 at 11:26:11AM +0100, Ingo Molnar wrote:
> There's one area that needs more thought i think: the dump-all-classes 
> init-event-injector approach. It is async, hence we could lose events if 
> there's a lot of lock classes to dump.


Yeah, the dump itself is purely async.

But the lock_class_init event is used from two sites:

- the injector, which is purely asynchronous and catches up on
  classes created in the past

- register_lock_class(), which is the synchronous point, hit each
  time a new class is created.

When we register a lock_class_init ftrace/perf event, we first activate
the synchronous point; it behaves there like a usual event and hooks
into every class created from then on.

Only after that do we call the injector, which is asynchronous and
replays the classes created in the past.

Splitting it in two parts like that covers every event.
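
To make the ordering concrete, here is a minimal sketch of that scheme
(not the actual patch): trace_lock_class_init() is an assumed
tracepoint name and the two helpers are hypothetical; only
register_lock_class(), struct lock_class and the all_lock_classes list
are real lockdep internals.

#include <linux/list.h>
#include <linux/lockdep.h>

/* 1) Synchronous site: called from register_lock_class() for every
 *    class created from now on (hypothetical hook). */
static void lock_class_init_sync(struct lock_class *class)
{
	trace_lock_class_init(class);		/* assumed event */
}

/* 2) Asynchronous injector: replays the classes that already exist
 *    (all_lock_classes is the list lockdep keeps internally). */
static void lock_class_init_inject(void)
{
	struct lock_class *class;

	list_for_each_entry(class, &all_lock_classes, lock_entry)
		trace_lock_class_init(class);
}

/* Enable path: arm the synchronous site first, then run the injector,
 * so a class created while we are dumping is not missed. */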


> Plus we eventually want to use your 
> injector approach for other things as well (such as to dump the state of a 
> collection of tasks) - so i think we want it to be more synchronous.


Yeah, that would work for tasks as well, and we can follow the same
pattern there.

We can set up a synchronous trace point in fork and have a secondary
asynchronous point that dumps the task list. That too would cover
every event we want.
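
For tasks, the asynchronous side could look something like the sketch
below; task_init is a made-up event name, and the walk itself is just
the standard tasklist_lock / do_each_thread() idiom:

#include <linux/sched.h>

/* Replay a task_init event for every task that already exists
 * (hypothetical; the synchronous counterpart would sit in fork). */
static void task_init_inject(void)
{
	struct task_struct *g, *t;

	read_lock(&tasklist_lock);
	do_each_thread(g, t) {
		trace_task_init(t);		/* assumed event */
	} while_each_thread(g, t);
	read_unlock(&tasklist_lock);
}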

 
> One approach would be to allow a gradual read() deplete the dump. Also, i 
> think the 'state dump' events should be separate from regular init events. 
> Filters attached to these events will automatically cause the dumping to be 
> restricted to the filter set. For example in the case of tasks one could dump 
> only tasks from a particular UID - by adding a 'uid == 1234' filter before 
> the dump (on a per tasks basis - so the filtering is nicely task local).


But this is what we want, right? If the init event and the dump event
are the same, which is the case currently, the filter applies to both.

And if we are only interested in tasks with uid == 1234, I guess we
want the async and sync events to have the same filter.

Maybe we want to split init events from dump events, say, have an
event class that you can open in either async or sync mode. But I
can't figure out a workflow for which that would be useful.

