Date:	Thu, 25 Mar 2010 16:47:44 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Li Zefan <lizf@...fujitsu.com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Avi Kivity <avi@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-kernel@...r.kernel.org, Sheng Yang <sheng@...ux.intel.com>,
	Joerg Roedel <joro@...tes.org>,
	Jes Sorensen <Jes.Sorensen@...hat.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Gleb Natapov <gleb@...hat.com>, kvm@...r.kernel.org,
	zhiteng.huang@...el.com, Zachary Amsden <zamsden@...hat.com>
Subject: Re: [PATCH 3/3] perf events: Change perf parameter --pid to
 process-wide collection instead of thread-wide

On Thu, 2010-03-25 at 16:02 +0800, Li Zefan wrote:
> Zhang, Yanmin wrote:
> > From: Zhang, Yanmin <yanmin_zhang@...ux.intel.com>
> > 
> > Parameter --pid (or -p) of perf currently means a thread-wide collection.
> > For example, if a process with id 8888 has 10 threads, 'perf top -p 8888'
> > collects statistics only for the main thread. That's misleading. Users are
> > used to attaching to a whole process when debugging it with gdb. To follow
> > that usual style, this patch changes --pid to mean process-wide collection
> > and adds --tid (-t) for thread-wide collection.
> > 
> > Usage examples:
> > #perf top -p 8888
> > #perf record -p 8888 -f sleep 10
> > #perf stat -p 8888 -f sleep 10
> > The above commands collect statistics for all threads of process 8888.
> > 
> > Signed-off-by: Zhang Yanmin <yanmin_zhang@...ux.intel.com>
> > 
> 
> Seems this patch causes seg faults:
> 
> # ./perf sched record
> Segmentation fault
> # ./perf kmem record
> Segmentation fault
> # ./perf timechart record
> Segmentation fault
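
The process-wide collection described in the quoted changelog amounts to
enumerating every tid under /proc/<pid>/task and opening a counter for each
one. A minimal sketch of that enumeration, with hypothetical names, not the
actual perf code:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: walk /proc/<pid>/task to find every tid of a
 * process.  A process-wide "-p" attach opens one counter per tid found
 * here; the names below are illustrative, not perf's. */
static int for_each_tid(pid_t pid)
{
	char path[64];
	struct dirent *ent;
	DIR *dir;
	int nr = 0;

	snprintf(path, sizeof(path), "/proc/%d/task", (int)pid);
	dir = opendir(path);
	if (!dir)
		return -1;

	while ((ent = readdir(dir)) != NULL) {
		if (ent->d_name[0] == '.')
			continue;
		/* a real tool would perf_event_open() for this tid */
		printf("would attach to tid %s\n", ent->d_name);
		nr++;
	}
	closedir(dir);
	return nr;
}

int main(int argc, char **argv)
{
	return for_each_tid(argc > 1 ? atoi(argv[1]) : getpid()) < 0;
}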

Thanks for reporting it. Arnaldo, could you pick up the patch below?
Zefan, could you try it?

mmap_array[][][] is not zero-initialized by malloc. The patch below,
against tip/master of March 24th, fixes it by switching to zalloc.

Reported-by:	Li Zefan <lizf@...fujitsu.com>
Signed-off-by:	Zhang Yanmin <yanmin_zhang@...ux.intel.com>

---

diff -Nraup linux-2.6_tip0324/tools/perf/builtin-record.c linux-2.6_tip0324_perfkvm/tools/perf/builtin-record.c
--- linux-2.6_tip0324/tools/perf/builtin-record.c	2010-03-25 10:58:13.308912201 +0800
+++ linux-2.6_tip0324_perfkvm/tools/perf/builtin-record.c	2010-03-25 16:14:18.201475298 +0800
@@ -751,7 +751,7 @@ int cmd_record(int argc, const char **ar
 	for (i = 0; i < MAX_NR_CPUS; i++) {
 		for (j = 0; j < MAX_COUNTERS; j++) {
 			fd[i][j] = malloc(sizeof(int)*thread_num);
-			mmap_array[i][j] = malloc(
+			mmap_array[i][j] = zalloc(
 				sizeof(struct mmap_data)*thread_num);
 			if (!fd[i][j] || !mmap_array[i][j])
 				return -ENOMEM;
diff -Nraup linux-2.6_tip0324/tools/perf/builtin-top.c linux-2.6_tip0324_perfkvm/tools/perf/builtin-top.c
--- linux-2.6_tip0324/tools/perf/builtin-top.c	2010-03-25 10:58:13.284848937 +0800
+++ linux-2.6_tip0324_perfkvm/tools/perf/builtin-top.c	2010-03-25 16:14:56.875266645 +0800
@@ -1371,7 +1371,7 @@ int cmd_top(int argc, const char **argv,
 	for (i = 0; i < MAX_NR_CPUS; i++) {
 		for (j = 0; j < MAX_COUNTERS; j++) {
 			fd[i][j] = malloc(sizeof(int)*thread_num);
-			mmap_array[i][j] = malloc(
+			mmap_array[i][j] = zalloc(
 				sizeof(struct mmap_data)*thread_num);
 			if (!fd[i][j] || !mmap_array[i][j])
 				return -ENOMEM;
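
For reference, the zalloc() used in the patch above returns zero-initialized
memory, so the struct mmap_data slots start out all-zero instead of holding
whatever malloc() left behind. A minimal sketch of such a helper, assuming it
is simply a thin wrapper around calloc() (perf carries its own definition in
the tools/perf sources):

#include <stdlib.h>

/* zalloc(): like malloc(), but the returned memory is zeroed.
 * Sketch only; assumes a plain calloc() wrapper. */
static inline void *zalloc(size_t size)
{
	return calloc(1, size);
}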


