Message-ID: <5600CAC6.7050504@gmail.com>
Date: Mon, 21 Sep 2015 21:28:06 -0600
From: David Ahern <dsahern@...il.com>
To: Yunlong Song <yunlong.song@...wei.com>, a.p.zijlstra@...llo.nl,
paulus@...ba.org, mingo@...hat.com, acme@...nel.org,
rostedt@...dmis.org, ast@...nel.org, jolsa@...nel.org,
Namhyung Kim <namhyung@...nel.org>,
masami.hiramatsu.pt@...achi.com, adrian.hunter@...el.com,
bp@...en8.de, rric@...nel.org
Cc: linux-kernel@...r.kernel.org, wangnan0@...wei.com
Subject: Re: [RFC resend] Perf: Trigger and dump sample info to perf.data from
user space ring buffer
On 9/21/15 9:16 PM, Yunlong Song wrote:
> [Problem Background]
>
> We want to run perf in daemon mode and collect the traces when an exception
> (e.g., the machine crashes, app performance goes down) appears. Perf may run
> for a long time (from days to weeks or even months), since we do not know when
> the exception will appear; however, it will appear at some point (especially
> for a beta product). If we simply use “perf record” as usual, two problems
> come up as time goes by: 1) a large amount of IO is created for writing
> perf.data, which may affect performance a lot; 2) the size of perf.data keeps
> growing as well. Although we can use eBPF to reduce the traces in the normal
> case, in our case perf runs in daemon mode for a long time and the traces
> accumulate as time goes by.
This is a perf-based scheduling daemon I wrote a few years ago:
https://github.com/dsahern/linux/blob/perf/full-monty/tools/perf/schedmon.c
It solves a similar problem by holding the last N seconds or M bytes of
events in memory. When something of significance happens, the daemon is
notified and dumps the buffered events to a file. The events are scheduling
tracepoints but could easily be anything else.
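
The gist is just a fixed-size ring in memory plus a "dump now" trigger. A
rough sketch of the idea (not schedmon.c itself; the ev_entry/ring_* names,
the SIGUSR1 trigger and the events.dump path are placeholders, and the event
payload stands in for whatever you pull out of the perf mmap buffer):

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define RING_SLOTS 4096			/* last N events kept in memory */

struct ev_entry {
	uint64_t ts_ns;			/* event timestamp */
	char	 desc[64];		/* event payload, whatever it is */
};

static struct ev_entry ring[RING_SLOTS];
static uint64_t head;			/* total events seen so far */
static volatile sig_atomic_t dump_req;

static void on_sigusr1(int sig)
{
	(void)sig;
	dump_req = 1;			/* ask the main loop to dump */
}

static void ring_add(uint64_t ts, const char *desc)
{
	struct ev_entry *e = &ring[head++ % RING_SLOTS];

	e->ts_ns = ts;
	snprintf(e->desc, sizeof(e->desc), "%s", desc);
}

static void ring_dump(const char *path)
{
	uint64_t i, start = head > RING_SLOTS ? head - RING_SLOTS : 0;
	FILE *fp = fopen(path, "w");

	if (!fp)
		return;
	/* oldest-to-newest walk over whatever is still in the ring */
	for (i = start; i < head; i++) {
		struct ev_entry *e = &ring[i % RING_SLOTS];

		fprintf(fp, "%llu %s\n",
			(unsigned long long)e->ts_ns, e->desc);
	}
	fclose(fp);
}

int main(void)
{
	struct sigaction sa = { .sa_handler = on_sigusr1 };

	sigaction(SIGUSR1, &sa, NULL);

	for (;;) {
		struct timespec ts;

		/* stand-in for reading events from the perf mmap buffer */
		clock_gettime(CLOCK_MONOTONIC, &ts);
		ring_add((uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec,
			 "sample event");

		if (dump_req) {
			dump_req = 0;
			ring_dump("events.dump");
		}
		usleep(10000);
	}
	return 0;
}

Old events simply get overwritten in place, so steady-state IO is zero and
memory use is bounded; you only pay the write cost when the trigger fires.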
David