Message-ID: <20150327200922.GL21510@kernel.org>
Date:	Fri, 27 Mar 2015 17:09:22 -0300
From:	Arnaldo Carvalho de Melo <acme@...nel.org>
To:	Don Zickus <dzickus@...hat.com>
Cc:	David Ahern <dsahern@...il.com>, linux-kernel@...r.kernel.org,
	Joe Mario <jmario@...hat.com>, Jiri Olsa <jolsa@...hat.com>
Subject: Re: [PATCH v2] perf tool: Fix ppid for synthesized fork events

On Fri, Mar 27, 2015 at 03:49:41PM -0400, Don Zickus wrote:
> On Fri, Mar 27, 2015 at 11:20:36AM -0300, Arnaldo Carvalho de Melo wrote:
> > ... which is what David is suggesting here:
> >  
> > > Try this:
> > > perf record -o unpatched.data -g -- perf.unpatched mem record -a -e
> > > cpu/mem-loads,ldlat=50/pp -e cpu/mem-stores/pp sleep 10
> > > 
> > > perf record -o patched.data -g -- perf.patched mem record -a -e
> > > cpu/mem-loads,ldlat=50/pp -e cpu/mem-stores/pp sleep 10
> > > 
> > > And then compare the reports for each.
> > 
> > Cache effects, i.e. the OS FS caches for the files accessed when building
> > the build-id table, could be responsible for part of the difference at
> > some point, but further investigation using 'perf record' on the
> > patched/unpatched binaries will give us more clues.
> 
> Alright, Joe and I poked some more and, as I thought, David's patch does
> something subtle which may have inadvertently undone my original patch,
> though the threading model isn't clear in my head right now.
> 
> Here is the patch I added to test a theory:
> 
> diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> index 1c8fbc9..7ee3823 100644
> --- a/tools/perf/util/thread.c
> +++ b/tools/perf/util/thread.c
> @@ -187,6 +187,7 @@ static int thread__clone_map_groups(struct thread *thread,
>  	if (thread->pid_ == parent->pid_)
>  		return 0;
>  
> +	printf("DON:\n");
>  	/* But this one is new process, copy maps. */
>  	for (i = 0; i < MAP__NR_TYPES; ++i)
>  		if (map_groups__clone(thread->mg, parent->mg, i) < 0)
> 
> Before David's patch, we do _not_ see any DON markers.  After David's patch
> we see a 1:1 match of DON markers to the number of threads currently running
> in the system.
> 
> Consistent with that, the perf record -g run David recommended showed a
> spike in rb_next and map_groups__clone.
> 
> 
> So the next question is: is this correct?  On the surface I would say no,
> because it looks like we are no longer being smart and taking advantage of
> the thread maps that already exist.  But I guess the idea behind cloning is
> that we are.
> 
> I can't think right now what the correct behaviour is.  Thoughts?

OK, so if we correctly set up event->fork.ppid in perf_event__synthesize_fork(),
then when we call process() there it will end up calling:

	perf_event__process_fork()
		machine__process_fork_event()

And that will call:

        struct thread *thread = machine__find_thread(machine,
                                                     event->fork.pid,
                                                     event->fork.tid);
        struct thread *parent = machine__findnew_thread(machine,
                                                        event->fork.ppid,
                                                        event->fork.ptid);

Without David's patch the second call will pass -1 for both ppid and ptid,
right? That will find (or rather create, since it is machine__findnew_thread())
a fake parent thread with pid == -1 and tid == -1, one that has no mmaps.

The next thing machine__process_fork_event() will do is call:

  thread__fork(thread, parent, sample->time)

And that is what will end up calling:

	thread__clone_map_groups(thread, parent)
		map_groups__clone(thread->mg, parent->mg, i)
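
For reference, thread__clone_map_groups(), the function Don instrumented above,
looks roughly like this (reconstructed from his hunk, minus the test printf;
the tail is from memory, so treat it as a sketch):

	static int thread__clone_map_groups(struct thread *thread,
					    struct thread *parent)
	{
		int i;

		/* New thread in an existing process: share its map groups. */
		if (thread->pid_ == parent->pid_)
			return 0;

		/* New process: copy the parent's maps for each map type. */
		for (i = 0; i < MAP__NR_TYPES; ++i)
			if (map_groups__clone(thread->mg, parent->mg, i) < 0)
				return -ENOMEM;

		return 0;
	}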

So, assuming the parent was synthesized first and got its mmaps, then
when the child is processed it will find the parent and clone its mmaps.
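
And map_groups__clone() itself, roughly (a sketch from memory of this era of
tools/perf/util/map.c, so the details may differ), walks the parent's rb-tree
of maps for the given type and inserts a clone of each into the child's map
groups, which is also where the extra rb_next and map_groups__clone cycles in
Don's profile come from:

	int map_groups__clone(struct map_groups *mg,
			      struct map_groups *parent, enum map_type type)
	{
		struct rb_node *nd;

		/* Walk the parent's maps of this type and insert a copy of
		 * each one into the child's map groups. */
		for (nd = rb_first(&parent->maps[type]); nd; nd = rb_next(nd)) {
			struct map *map = rb_entry(nd, struct map, rb_node);
			struct map *new = map__clone(map);

			if (new == NULL)
				return -ENOMEM;

			map_groups__insert(mg, new);
		}

		return 0;
	}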

- Arnaldo
