Date:   Tue, 25 Feb 2020 18:30:25 -0800
From:   Luigi Rizzo <lrizzo@...gle.com>
To:     linux-kernel@...r.kernel.org, mhiramat@...nel.org,
        akpm@...ux-foundation.org, gregkh@...uxfoundation.org,
        naveen.n.rao@...ux.ibm.com, changbin.du@...el.com, ardb@...nel.org,
        rizzo@....unipi.it, pabeni@...hat.com, toke@...hat.com,
        hawk@...nel.org
Cc:     Luigi Rizzo <lrizzo@...gle.com>
Subject: [PATCH 0/2] quickstats, kernel sample collector

This patchset introduces a small library to collect per-cpu samples and
accumulate distributions to be exported through debugfs.

I have used this to trace the execution times of small sections of code
(network stack functions, drivers and XDP processing, ...), of entire
functions, and of latencies (e.g. IPI dispatch). The fine granularity
helps identify outliers, multimodal distributions and caching effects,
and generally helps with performance analysis.

Samples are 64-bit values that are aggregated into per-cpu logarithmic
buckets with configurable density (typical sample collectors use one
bucket per power of 2, but that is very coarse and corresponds to
1 significant bit per value; in quickstats one can specify the number
of significant bits, up to 5, which is useful for finer grained
measurements).
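
To make the bucketing concrete, here is a minimal sketch of how a sample
could be mapped to a log-linear bucket index with frac_bits significant
bits per power of 2 (this is only an illustration consistent with the
description above, not necessarily the exact code in lib/kstats.c):

	#include <linux/types.h>
	#include <linux/bitops.h>

	/* Sketch only: values below 2^frac_bits map linearly, larger values
	 * use the position of the leading bit plus the frac_bits bits
	 * immediately below it.
	 */
	static inline unsigned int sample_to_bucket(u64 val, unsigned int frac_bits)
	{
		unsigned int exp, frac;

		if (val < (1ULL << frac_bits))
			return (unsigned int)val;

		exp = fls64(val) - 1;		/* position of the leading bit */
		frac = (val >> (exp - frac_bits)) & ((1u << frac_bits) - 1);
		return ((exp - frac_bits + 1) << frac_bits) + frac;
	}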

There are two ways to trace a block of code. Manual annotation has the
best runtime performance and is done as follows:

	// create metric and entry in debugfs
	struct kstats *key = kstats_new("foo", frac_bits);

	...
	// instrument the code
	u64 t0 = ktime_get_ns();        // about 20ns
	<section of code to measure>
	t0 = ktime_get_ns() - t0;       // about 20ns
	kstats_record(key, t0);         // 5ns hot cache, 300ns cold

This method has an accuracy of about 20-30ns (inherited from the clock)
and an overhead with hot/cold cache as shown above.
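
The two timestamps and the record call can also be wrapped in a small
helper macro; the following is just a sketch built on the API shown
above (KSTATS_TIME_BLOCK and some_function are not part of the patchset):

	#include <linux/ktime.h>
	#include <linux/kstats.h>

	/* Sketch only: time a statement and record the elapsed ns. */
	#define KSTATS_TIME_BLOCK(ks, stmt)				\
		do {							\
			u64 __t0 = ktime_get_ns();			\
			stmt;						\
			kstats_record((ks), ktime_get_ns() - __t0);	\
		} while (0)

	/* usage, with 'key' created by kstats_new() as above */
	KSTATS_TIME_BLOCK(key, some_function());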

Values can be read from debugfs in an easy-to-parse format:

	# cat /sys/kernel/debug/kstats/foo
	...
	slot 55  CPU  0    count      589 avg      480 p 0.027613
	slot 55  CPU  1    count       18 avg      480 p 0.002572
	slot 55  CPU  2    count       25 avg      480 p 0.003325
	...
	slot 55  CPUS 28   count      814 avg      480 p 0.002474
	...
	slot 97  CPU  13   count     1150 avg    20130 p 0.447442
	slot 97  CPUS 28   count   152585 avg    19809 p 0.651747
	...

Writing STOP, START or RESET to the file executes the corresponding action:

	echo RESET > /sys/kernel/debug/kstats/foo

The second instrumentation mechanism uses kretprobes or tracepoints and
lets tracing be enabled or removed at runtime from the command line for
any globally visible function:

	echo trace some_function bits 3 > /sys/kernel/debug/kstats/_control

Data are exported and controlled as above. Accuracy is worse due to the
presence of kretprobe trampolines: 90ns with hot cache, 500ns with cold
cache. The overhead on the traced function is 250ns hot, 1500ns cold.
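
For reference, this is roughly how such a tracer can be built with a
kretprobe feeding kstats_record(); the sketch below only illustrates the
mechanism and is not the code in lib/kstats.c (target_func is a
placeholder symbol name, error handling omitted):

	#include <linux/module.h>
	#include <linux/kprobes.h>
	#include <linux/ktime.h>
	#include <linux/kstats.h>

	static struct kstats *ks;	/* metric exported via debugfs */

	struct ts_data {		/* per-instance entry timestamp */
		u64 t0;
	};

	static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		((struct ts_data *)ri->data)->t0 = ktime_get_ns();
		return 0;
	}

	static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		kstats_record(ks, ktime_get_ns() - ((struct ts_data *)ri->data)->t0);
		return 0;
	}

	static struct kretprobe rp = {
		.kp.symbol_name	= "target_func",
		.entry_handler	= entry_handler,
		.handler	= ret_handler,
		.data_size	= sizeof(struct ts_data),
		.maxactive	= 32,
	};

	static int __init ktrace_sketch_init(void)
	{
		ks = kstats_new("target_func", 3);	/* 3 significant bits */
		return register_kretprobe(&rp);
	}

	static void __exit ktrace_sketch_exit(void)
	{
		unregister_kretprobe(&rp);
	}

	module_init(ktrace_sketch_init);
	module_exit(ktrace_sketch_exit);
	MODULE_LICENSE("GPL");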

Hope you find this useful

Luigi Rizzo (2):
  quickstats, kernel sample collector
  quickstats: user commands to trace execution time of code

 include/linux/kstats.h |  61 ++++
 lib/Kconfig.debug      |   7 +
 lib/Makefile           |   1 +
 lib/kstats.c           | 659 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 728 insertions(+)
 create mode 100644 include/linux/kstats.h
 create mode 100644 lib/kstats.c

-- 
2.25.0.265.gbab2e86ba0-goog
