Message-ID: <alpine.DEB.2.10.1308231704450.24113@vincent-weaver-1.um.maine.edu>
Date: Fri, 23 Aug 2013 17:08:11 -0400 (EDT)
From: Vince Weaver <vincent.weaver@...ne.edu>
To: Borislav Petkov <bp@...en8.de>
cc: Robert Richter <rric@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Jiri Olsa <jolsa@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 12/12] [RFC] perf, persistent: ioctl functions to
control persistency
On Fri, 23 Aug 2013, Borislav Petkov wrote:
> Maybe this makes it more understandable for you but this is beside the
> point.
Understandability doesn't matter?
> But I have to say the reversed thing above does sound confusing, now
> that I'm looking at the code. Actually, at the time we discussed this,
> my idea was to do it like this:
>
> 1. we open a perf event and get its file descriptor
> 2. ioctl ATTACH to it so that it is attached to the process.
>
> ... do some tracing and collecting and fiddling...
>
> 3. ioctl DETACH from it so that it is "forked in the background" so to
> speak, very similar to a background job in the shell.
Would it make sense to actually fork a kernel thread that "owns" the
event?
The way it is now, events can "get loose" if the user either
forgets about them or the tool that opened them crashes, and it's
impossible to kill these events with normal tools. You might not
even know one was running (except that you'd have one fewer
counter to work with) unless you poked around under /sys.
> 4. The rest of the code continues and deallocates the event *BUT* (and
> this is the key thing!) the counter/tracepoint remains operational in
> the kernel, running all the time.
>
> 5. Now, after a certain point, you come back and ioctl ATTACH to this
> already opened event and read/collect its buffers again.
Vince
--