Message-Id: <20210601130751.25750-2-james.clark@arm.com>
Date:   Tue,  1 Jun 2021 16:07:51 +0300
From:   James Clark <james.clark@....com>
To:     acme@...nel.org, mathieu.poirier@...aro.org,
        coresight@...ts.linaro.org, leo.yan@...aro.org
Cc:     al.grant@....com, branislav.rankov@....com, denik@...omium.org,
        suzuki.poulose@....com, anshuman.khandual@....com,
        James Clark <james.clark@....com>,
        Mike Leach <mike.leach@...aro.org>,
        John Garry <john.garry@...wei.com>,
        Will Deacon <will@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        linux-arm-kernel@...ts.infradead.org,
        linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [RFC PATCH v4 1/1] perf cs-etm: Split Coresight decode by aux records

Populate the auxtrace queues using AUX records rather than whole
auxtrace buffers.

This is similar to the auxtrace_queues__process_index() ->
auxtrace_queues__add_indexed_event() flow where
perf_session__peek_event() is used to read AUXTRACE events out of random
positions in the file based on the auxtrace index.

Now there is a second pass using perf_session__peek_events() which loops
over all PERF_RECORD_AUX events, finds the corresponding auxtrace
buffer, and adds a fragment of that buffer to the auxtrace queues,
rather than the whole buffer.

This is a completely new implementation compared to v1 and v2 of the
patchset, where significant modifications were made to the decoding flow,
including saving all of the AUX records, and changing the point where
decoding started. These changes had effects on the outcome of decoding,
but with this implementation, the output of decoding is identical to
perf/core, except in the case where decoding failed due to the issue
that this change addresses. No changes to decoding were made, apart
from populating the auxtrace queues.

Because the decoder is already reset between each auxtrace buffer,
populating the queues with fragments of whole buffers based on AUX
records has the same effect as resetting the decoder on the AUX record
bounds during decode. But this approach only requires a change to one
part of the whole flow, rather than many parts.
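
To illustrate (numbers made up), suppose one AUXTRACE buffer and two
AUX records end up in the perf.data file like this:

	PERF_RECORD_AUXTRACE: offset = 0x1000, size = 0x8000, header at file offset F
	PERF_RECORD_AUX #1:   aux_offset = 0x1000, aux_size = 0x1000
	PERF_RECORD_AUX #2:   aux_offset = 0x2000, aux_size = 0x6000

Instead of queueing the whole 0x8000 byte buffer once, two fragments
are queued:

	fragment #1: offset = 0x1000, size = 0x1000, data at F + header.size
	fragment #2: offset = 0x2000, size = 0x6000,
	             data at F + header.size + (0x2000 - 0x1000)

The decoder is reset between queued buffers, so it is now effectively
reset at each AUX record boundary.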

There is still one TODO about how buffers are found. Currently a plain
linear loop is used for the search, but this could be improved with a
binary search, or by keeping a record of which buffers have already
been added and starting the search from the last used place. AUX
records and buffers appear in a similar order in the file, so the
latter would be simpler and probably good enough. But I'm looking for
feedback on the approach as a whole before ironing out all of the
details.
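
As a rough sketch of the "start from the last used place" idea (not
part of this patch; the flattened entries array and the last_used
field are hypothetical), the search in the callback could look
something like this:

	/*
	 * Hypothetical sketch only: resume the linear search from the entry
	 * that matched the previous AUX record. AUX records and AUXTRACE
	 * buffers appear in roughly the same order in the file, so this
	 * would usually find the right buffer on the first try.
	 */
	static int cs_etm__find_buffer_for_aux(struct perf_session *session,
					       struct cs_etm_auxtrace *etm,
					       struct auxtrace_index_entry *entries,
					       size_t nr_entries,
					       union perf_event *event,
					       struct perf_sample *sample)
	{
		size_t i, idx;
		int ret;

		for (i = 0; i < nr_entries; i++) {
			/* Start at the last match and wrap around once. */
			idx = (etm->last_used + i) % nr_entries;
			ret = cs_etm__queue_aux_fragment(session, entries[idx].file_offset,
							 entries[idx].sz, &event->aux, sample);
			if (!ret) {
				etm->last_used = idx;
				return 0;
			}
			if (ret < 0)
				return ret;
			/* ret == 1 means "not this buffer", keep looking. */
		}

		return 1;	/* No matching buffer found. */
	}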

Other issues that the previous patchsets addressed, such as the case
where the first AUX record comes before the first MMAP record, can now
be fixed as completely independent issues.

Testing
=======

Testing was done with the following script, to diff the decoding results
between the patched and unpatched versions of perf:

	#!/bin/bash
	set -ex

	$1 script -i $3 $4 > split.script
	$2 script -i $3 $4 > default.script

	diff split.script default.script | head -n 20

And it was run like this, with various itrace options depending on the
quantity of synthesised events:

	compare.sh ./perf-patched ./perf-default perf-per-cpu-2-threads.data --itrace=i100000ns

No changes in output were observed in the following scenarios:

* Simple per-cpu
	perf record -e cs_etm/@..._etr0/u top

* Per-thread, single thread
	perf record -e cs_etm/@..._etr0/u --per-thread ./threads_C

* Per-thread, multiple threads (but only one thread collected data):
	perf record -e cs_etm/@..._etr0/u --per-thread --pid 4596,4597

* Per-thread, multiple threads (both threads collected data):
	perf record -e cs_etm/@..._etr0/u --per-thread --pid 4596,4597

* Per-cpu explicit threads:
	perf record -e cs_etm/@..._etr0/u --pid 853,854

* System-wide (per-cpu):
	perf record -e cs_etm/@..._etr0/u -a

* No data collected (no aux buffers)
	Can happen with any command when run for a short period

* Containing truncated records
	Can happen with any command

* Containing aux records with 0 size
	Can happen with any command

* Snapshot mode
	perf record -e cs_etm/@..._etr0/u -a --snapshot

Signed-off-by: James Clark <james.clark@....com>
---
 tools/perf/util/cs-etm.c | 159 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 156 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index 64536a6ed10a..053aad4b28cf 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -2679,6 +2679,161 @@ static u64 *cs_etm__create_meta_blk(u64 *buff_in, int *buff_in_offset,
 	return metadata;
 }
 
+/**
+ * Puts a fragment of an auxtrace buffer into the auxtrace queues based
+ * on the bounds of aux_event, if it matches the buffer that is at
+ * file_offset.
+ *
+ * Normally, whole auxtrace buffers would be added to the queue. But we
+ * want to reset the decoder for every PERF_RECORD_AUX event, and the
+ * decoder is reset between each buffer, so splitting the buffers up in
+ * advance has the same effect.
+ */
+static int cs_etm__queue_aux_fragment(struct perf_session *session,
+						off_t file_offset, size_t sz,
+						struct perf_record_aux *aux_event,
+						struct perf_sample *sample)
+{
+	int err;
+	char buf[PERF_SAMPLE_MAX_SIZE];
+	union perf_event *auxtrace_event_union;
+	struct perf_record_auxtrace *auxtrace_event;
+	union perf_event auxtrace_fragment;
+	bool matchesCpuPid;
+	__u64 aux_offset;
+	struct cs_etm_auxtrace *etm = container_of(session->auxtrace,
+						   struct cs_etm_auxtrace,
+						   auxtrace);
+
+	/*
+	 * There should be a PERF_RECORD_AUXTRACE event at the file_offset that we got
+	 * from looping through the auxtrace index.
+	 */
+	err = perf_session__peek_event(session, file_offset, buf,
+				       PERF_SAMPLE_MAX_SIZE, &auxtrace_event_union, NULL);
+	if (err)
+		return err;
+	auxtrace_event = &auxtrace_event_union->auxtrace;
+	if (auxtrace_event->header.type != PERF_RECORD_AUXTRACE)
+		return -EINVAL;
+
+	if (auxtrace_event->header.size < sizeof(struct perf_record_auxtrace) ||
+		auxtrace_event->header.size != sz) {
+		return -EINVAL;
+	}
+
+	/*
+	 * In per-thread mode, CPU is set to -1, but TID will be set instead.
+	 * See auxtrace_mmap_params__set_idx()
+	 */
+	if (auxtrace_event->cpu == (__u32) -1)
+		matchesCpuPid = auxtrace_event->tid == sample->tid;
+	else
+		matchesCpuPid = auxtrace_event->cpu == sample->cpu;
+
+	/*
+	 * In snapshot/overwrite mode, the head points to the end of the buffer so aux_offset needs
+	 * to have the size subtracted so it points to the beginning as in normal mode.
+	 */
+	if (aux_event->flags & PERF_AUX_FLAG_OVERWRITE)
+		aux_offset = aux_event->aux_offset - aux_event->aux_size;
+	else
+		aux_offset = aux_event->aux_offset;
+
+	if (matchesCpuPid &&
+	    aux_offset >= auxtrace_event->offset &&
+	    aux_offset + aux_event->aux_size <= auxtrace_event->offset + auxtrace_event->size) {
+		/*
+		 * If this AUX event was inside this buffer somewhere, create a new auxtrace event
+		 * based on the sizes of the aux event, and queue that fragment.
+		 */
+		auxtrace_fragment.auxtrace = *auxtrace_event;
+		auxtrace_fragment.auxtrace.size = aux_event->aux_size;
+		auxtrace_fragment.auxtrace.offset = aux_offset;
+		file_offset += aux_offset - auxtrace_event->offset + auxtrace_event->header.size;
+		return auxtrace_queues__add_event(&etm->queues,
+					       session,
+					       &auxtrace_fragment,
+					       file_offset,
+					       NULL);
+	}
+
+	/* Wasn't inside this buffer, but there were no parse errors. 1 == 'not found' */
+	return 1;
+}
+
+static int cs_etm__queue_aux_records_cb(struct perf_session *session,
+			     union perf_event *event, u64 offset __maybe_unused,
+			     void *data __maybe_unused)
+{
+	struct perf_sample sample;
+	int ret;
+	struct auxtrace_index_entry *ent;
+	struct auxtrace_index *auxtrace_index;
+	struct evsel *evsel;
+	size_t i;
+	struct cs_etm_auxtrace *etm = container_of(session->auxtrace,
+						   struct cs_etm_auxtrace,
+						   auxtrace);
+
+	/* Don't care about any other events; we're only queuing buffers for AUX events */
+	if (event->header.type != PERF_RECORD_AUX)
+		return 0;
+
+	if (event->header.size < sizeof(struct perf_record_aux))
+		return -EINVAL;
+
+	/* Truncated AUX records can have 0 size and shouldn't result in anything being queued. */
+	if (!event->aux.aux_size)
+		return 0;
+
+	/*
+	 * Parse the sample. We need the sample_id_all data that comes after the event so that the
+	 * CPU or PID can be matched to an AUXTRACE buffer's CPU or PID.
+	 */
+	evsel = evlist__event2evsel(session->evlist, event);
+	evsel__parse_sample(evsel, event, &sample);
+
+	/*
+	 * Loop through the auxtrace index to find the buffer that matches up with this
+	 * aux event.
+	 * TODO: binary search?
+	 */
+	list_for_each_entry(auxtrace_index, &session->auxtrace_index, list) {
+		for (i = 0; i < auxtrace_index->nr; i++) {
+			ent = &auxtrace_index->entries[i];
+			ret = cs_etm__queue_aux_fragment(session, ent->file_offset,
+							 ent->sz, &event->aux, &sample);
+			if (!ret) {
+				etm->data_queued = true;
+				return 0;
+			} else if (ret < 0) {
+				/*
+				 * A negative return value is an error. Positive values mean
+				 * 'not found', so just move on to the next entry in that case.
+				 */
+				return ret;
+			}
+		}
+	}
+
+	/*
+	 * We would get here if there are no entries in the index (either no auxtrace buffers or
+	 * no index). Fail silently as there is the possibility of queueing them in
+	 * cs_etm__process_auxtrace_event() if etm->data_queued is still false.
+	 *
+	 * In that scenario, buffers will not be split by AUX records.
+	 */
+	return 0;
+}
+
+static int cs_etm__queue_aux_records(struct perf_session *session)
+{
+	return perf_session__peek_events(session, session->header.data_offset,
+					 session->header.data_size,
+					 cs_etm__queue_aux_records_cb, NULL);
+}
+
 int cs_etm__process_auxtrace_info(union perf_event *event,
 				  struct perf_session *session)
 {
@@ -2879,12 +3034,10 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
 	if (err)
 		goto err_delete_thread;
 
-	err = auxtrace_queues__process_index(&etm->queues, session);
+	err = cs_etm__queue_aux_records(session);
 	if (err)
 		goto err_delete_thread;
 
-	etm->data_queued = etm->queues.populated;
-
 	return 0;
 
 err_delete_thread:
-- 
2.28.0
