Message-ID: <20090509105742.GA8398@localhost>
Date:	Sat, 9 May 2009 18:57:42 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Frédéric Weisbecker <fweisbec@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Li Zefan <lizf@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andi Kleen <andi@...stfloor.org>,
	Matt Mackall <mpm@...enic.com>,
	Alexey Dobriyan <adobriyan@...il.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [patch] tracing/mm: add page frame snapshot trace

On Sat, May 09, 2009 at 06:01:37PM +0800, Ingo Molnar wrote:
> 
> * Wu Fengguang <fengguang.wu@...el.com> wrote:
> 
> > 2) support concurrent object iterations
> >    For example, a huge 1TB memory space can be split up into 10
> >    segments which can be queried concurrently (with different options).
> 
> this should already be possible. If you lseek the trigger file, that 
> will be understood as an 'offset' by the patch; if you then write a 
> (decimal) value into the file, that will be the count.
> 
> So it should already be possible to fork off nr_cpus helper threads, 
> one bound to each CPU, each triggering trace output of a separate 
> segment of the memory map - and each reading that CPU's 
> trace_pipe_raw file to recover the data - all in parallel.
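
To make sure I'm reading that scheme right, here is a rough user-space
sketch (the trigger path below is only a placeholder for whatever file
the patch actually exposes; trace_pipe_raw is the usual per-CPU ftrace
file; error handling omitted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* placeholder: the real trigger file name comes from the patch */
#define TRIGGER "/sys/kernel/debug/tracing/objects/mm/pages/dump"
#define NR_THREADS 4		/* one per CPU, for illustration */
#define CHUNK (1L << 18)	/* pages per segment, arbitrary */

static void *dump_segment(void *arg)
{
	long cpu = (long)arg;
	char path[128], count[32], buf[4096];
	cpu_set_t set;
	int tfd, rfd;
	ssize_t n;

	/* bind to one CPU so our events land in that CPU's buffer */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	/* lseek() position = starting offset, written decimal = count */
	tfd = open(TRIGGER, O_WRONLY);
	lseek(tfd, cpu * CHUNK, SEEK_SET);
	n = snprintf(count, sizeof(count), "%ld", CHUNK);
	write(tfd, count, n);
	close(tfd);

	/* recover the binary records from this CPU's ring buffer */
	snprintf(path, sizeof(path),
		 "/sys/kernel/debug/tracing/per_cpu/cpu%ld/trace_pipe_raw",
		 cpu);
	rfd = open(path, O_RDONLY | O_NONBLOCK);
	while ((n = read(rfd, buf, sizeof(buf))) > 0)
		;	/* hand the raw records to a consumer */
	close(rfd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	long i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, dump_segment, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}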

How will this work out in the general case? More examples: when
walking pages by file/process, is it possible to divide the
files/processes into N sets and dump their pages concurrently? When
walking the (huge) inode lists of different superblocks, is it
possible to fork one thread for each superblock?

All of the above situations would demand concurrent instances with
different filename/pid/superblock options.