Message-ID: <87ftexz93y.fsf@toke.dk>
Date: Wed, 26 Feb 2020 16:00:49 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Luigi Rizzo <lrizzo@...gle.com>, linux-kernel@...r.kernel.org,
mhiramat@...nel.org, akpm@...ux-foundation.org,
gregkh@...uxfoundation.org, naveen.n.rao@...ux.ibm.com,
ardb@...nel.org, rizzo@....unipi.it, pabeni@...hat.com,
giuseppe.lettieri@...pi.it, hawk@...nel.org, mingo@...hat.com,
acme@...nel.org, rostedt@...dmis.org, peterz@...radead.org
Cc: Luigi Rizzo <lrizzo@...gle.com>
Subject: Re: [PATCH v3 0/2] kstats: kernel metric collector

Luigi Rizzo <lrizzo@...gle.com> writes:
> This patchset introduces a small library to collect per-cpu samples and
> accumulate distributions to be exported through debugfs.
>
> This v3 series addresses some initial comments (mostly style fixes in the
> code) and revises commit logs.

Could you please add a proper changelog spanning all versions of the
patch as you iterate?

As for the idea itself, picking up this argument you made on v1:
> The tracepoint/kprobe/kretprobe solution is much more expensive --
> from my measurements, the hooks that invoke the various handlers take
> ~250ns with hot cache, 1500+ns with cold cache, and tracing an empty
> function this way reports 90ns with hot cache, 500ns with cold cache.
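
(For context: if I'm reading the series right, the kstats side of such
a measurement looks roughly like the sketch below -- the "foo" name,
the frac_bits value and do_something() are made up for illustration,
so do correct me if I have the API wrong.)

        /* Hedged sketch; kstats_new()/kstats_record() as I understand
         * them from this patchset, all values illustrative only. */
        static struct kstats *ks;       /* created once, e.g. at init */

        ks = kstats_new("foo", 3 /* frac_bits */);

        /* ...then in the code path being measured: */
        u64 t = ktime_get_ns();

        do_something();                 /* placeholder for measured code */
        kstats_record(ks, ktime_get_ns() - t);
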
I think it would be good if you could include an equivalent BPF-based
implementation of your instrumentation example so people can (a) see the
difference for themselves and get a better idea of how the approaches
differ in a concrete case and (b) quantify the difference in performance
between the two implementations.
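
Concretely, something along the lines of the untested, libbpf-style
sketch below is what I have in mind. measured_function is a placeholder
for whatever your kstats example instruments, and the log2 bucketing is
just my stand-in for the kstats distribution:

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        struct {
                __uint(type, BPF_MAP_TYPE_HASH);
                __uint(max_entries, 10240);
                __type(key, __u32);     /* tid */
                __type(value, __u64);   /* entry timestamp, ns */
        } start SEC(".maps");

        struct {
                __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
                __uint(max_entries, 64); /* one slot per log2(ns) bucket */
                __type(key, __u32);
                __type(value, __u64);
        } hist SEC(".maps");

        /* Verifier-friendly integer log2, unrolled to six steps. */
        static __always_inline __u32 log2l(__u64 v)
        {
                __u32 r = 0, i;

        #pragma unroll
                for (i = 32; i > 0; i >>= 1) {
                        if (v >= ((__u64)1 << i)) {
                                v >>= i;
                                r += i;
                        }
                }
                return r;
        }

        SEC("kprobe/measured_function")
        int BPF_KPROBE(on_entry)
        {
                __u32 tid = bpf_get_current_pid_tgid();
                __u64 ts = bpf_ktime_get_ns();

                bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
                return 0;
        }

        SEC("kretprobe/measured_function")
        int BPF_KRETPROBE(on_exit)
        {
                __u32 tid = bpf_get_current_pid_tgid();
                __u64 *tsp = bpf_map_lookup_elem(&start, &tid);
                __u64 *cnt;
                __u32 bucket;

                if (!tsp)
                        return 0;
                bucket = log2l(bpf_ktime_get_ns() - *tsp);
                bpf_map_delete_elem(&start, &tid);

                cnt = bpf_map_lookup_elem(&hist, &bucket);
                if (cnt)
                        __sync_fetch_and_add(cnt, 1);
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";

The histogram can then be dumped from userspace with, e.g., 'bpftool
map dump name hist', analogous to reading the kstats debugfs file.
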
-Toke