Message-ID: <64b0859f-aad3-43fa-4e4c-81614d0c75e4@linux.intel.com>
Date:   Mon, 1 Mar 2021 14:16:04 +0300
From:   "Bayduraev, Alexey V" <alexey.v.bayduraev@...ux.intel.com>
To:     Namhyung Kim <namhyung@...nel.org>,
        Alexei Budankov <abudankov@...wei.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Jiri Olsa <jolsa@...hat.com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Alexander Antonov <alexander.antonov@...ux.intel.com>
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer
 object

Hi,

On 20.11.2020 13:49, Namhyung Kim wrote:
> On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:

<SNIP>

>>  
>> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
>>  	/*
>>  	 * Mark the round finished in case we wrote
>>  	 * at least one event.
>> +	 *
>> +	 * No need for round events in directory mode,
>> +	 * because per-cpu maps and files have data
>> +	 * sorted by kernel.
> 
> But it's not just for a single cpu: since tasks can migrate, we need to
> look at other cpus' data too.  Thus we use the ordered events queue,
> and round events help to determine when to flush the data.  Without
> the round events, it'd consume a huge amount of memory during report.
> 
> If we separate tracking records and process them first, we should be
> able to process samples immediately without sorting them in the
> ordered events queue.  This will save both cpu cycles and memory
> footprint significantly IMHO.
> 
> Thanks,
> Namhyung
> 
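
For reference, the flush-on-round behavior described above boils down to
something like the sketch below. This is not the actual util/ordered-events.c
code; struct ev, process_sample() and flush_round() are made up just to show
why the round markers let the report side flush and reuse a bounded buffer
instead of queueing everything:

#include <stdio.h>
#include <stdlib.h>

struct ev {
	unsigned long long time;	/* sample timestamp */
	int cpu;			/* cpu the sample came from */
};

static int cmp_time(const void *a, const void *b)
{
	const struct ev *x = a, *y = b;

	return (x->time > y->time) - (x->time < y->time);
}

static void process_sample(const struct ev *e)
{
	printf("t=%llu cpu=%d\n", e->time, e->cpu);
}

/* A FINISHED_ROUND event was seen: nothing older can arrive anymore,
 * so sort what is buffered, emit it, and start the next round empty. */
static void flush_round(struct ev *buf, size_t *nr)
{
	size_t i;

	qsort(buf, *nr, sizeof(*buf), cmp_time);
	for (i = 0; i < *nr; i++)
		process_sample(&buf[i]);
	*nr = 0;
}

int main(void)
{
	struct ev buf[64];
	size_t nr = 0;

	/* Events from two mmap buffers, interleaved within one round. */
	buf[nr++] = (struct ev){ .time = 30, .cpu = 0 };
	buf[nr++] = (struct ev){ .time = 10, .cpu = 1 };
	buf[nr++] = (struct ev){ .time = 20, .cpu = 0 };
	flush_round(buf, &nr);	/* round 1 finished */

	buf[nr++] = (struct ev){ .time = 50, .cpu = 1 };
	buf[nr++] = (struct ev){ .time = 40, .cpu = 0 };
	flush_round(buf, &nr);	/* round 2 finished */

	return 0;
}

Without the round markers nothing bounds the sorting window, so samples would
have to stay queued until the end of the file, which is where the memory
footprint mentioned above comes from.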

As far as I understand, to split tracking records (FORK/MMAP/COMM) into
a separate file, we would need to implement a runtime trace decoder on the
perf-record side that recognizes such records as they come from the kernel.
Is that what you mean?
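
Roughly something like this on the record side, just as a sketch: only
struct perf_event_header and the PERF_RECORD_* types from linux/perf_event.h
are real kernel ABI here, while is_tracking_record(), route_record() and the
two file descriptors are made up (the real code would go through
record__write() and the perf_data directory files):

#include <linux/perf_event.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Is this a task-tracking (side-band) record rather than profile data? */
static bool is_tracking_record(const struct perf_event_header *hdr)
{
	switch (hdr->type) {
	case PERF_RECORD_MMAP:
	case PERF_RECORD_MMAP2:
	case PERF_RECORD_COMM:
	case PERF_RECORD_FORK:
	case PERF_RECORD_EXIT:
		return true;
	default:
		return false;
	}
}

/* Peek at the record header in the ring buffer and pick the output file. */
static void route_record(int tracking_fd, int data_fd, const void *rec)
{
	const struct perf_event_header *hdr = rec;
	int fd = is_tracking_record(hdr) ? tracking_fd : data_fd;

	if (write(fd, rec, hdr->size) < 0)
		perror("write");
}

int main(void)
{
	/* A fake COMM header just to exercise the routing; real records
	 * come straight from the kernel's mmap ring buffer, and the fds
	 * here only stand in for the two output files. */
	struct perf_event_header hdr = {
		.type = PERF_RECORD_COMM,
		.size = sizeof(hdr),
	};

	route_record(STDOUT_FILENO, STDERR_FILENO, &hdr);
	return 0;
}

The decoding itself is only a switch on hdr->type, but doing it in the record
path means touching every record, which is the extra overhead mentioned below.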

IMHO this could be tricky to implement and would add overhead that could lead
to data loss. Do you have any other ideas on how to optimize memory
consumption on the perf-report side without a runtime trace decoder?
Maybe "round events" could somehow help in directory mode?

BTW, in our tool we use another approach: two-pass trace file loading.
The first pass loads the tracking records, the second loads the samples.
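
In pseudo-C it looks about like this (everything here is hypothetical and
heavily simplified, it is not our real loader, and real perf.data parsing is
more involved than a flat stream of perf_event_header records):

#include <linux/perf_event.h>
#include <stdio.h>

typedef void (*handler_t)(const struct perf_event_header *hdr);

static unsigned long nr_tracking, nr_samples;

/* Walk every record in a flat record stream and hand it to @handle. */
static void for_each_record(FILE *fp, handler_t handle)
{
	union {
		struct perf_event_header hdr;
		char bytes[65536];
	} rec;

	rewind(fp);
	while (fread(&rec.hdr, sizeof(rec.hdr), 1, fp) == 1) {
		size_t payload;

		if (rec.hdr.size < sizeof(rec.hdr) || rec.hdr.size > sizeof(rec.bytes))
			break;	/* malformed record */
		payload = rec.hdr.size - sizeof(rec.hdr);
		if (payload && fread(rec.bytes + sizeof(rec.hdr), payload, 1, fp) != 1)
			break;	/* truncated file */
		handle(&rec.hdr);
	}
}

static void load_tracking(const struct perf_event_header *hdr)
{
	switch (hdr->type) {
	case PERF_RECORD_COMM:
	case PERF_RECORD_MMAP:
	case PERF_RECORD_MMAP2:
	case PERF_RECORD_FORK:
	case PERF_RECORD_EXIT:
		nr_tracking++;	/* pass 1: here the thread/map tables would be built */
		break;
	}
}

static void load_samples(const struct perf_event_header *hdr)
{
	if (hdr->type == PERF_RECORD_SAMPLE)
		nr_samples++;	/* pass 2: resolve against the pass-1 state */
}

int main(int argc, char **argv)
{
	FILE *fp = fopen(argc > 1 ? argv[1] : "trace.bin", "rb");

	if (!fp)
		return 1;
	for_each_record(fp, load_tracking);	/* first pass: tracking records */
	for_each_record(fp, load_samples);	/* second pass: samples */
	fclose(fp);
	printf("%lu tracking records, %lu samples\n", nr_tracking, nr_samples);
	return 0;
}

No sorting or ordered-events queue is needed in the second pass because all
tracking records are already known before the first sample is processed.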

Thanks,
Alexey

> 
>>  	 */
>> -	if (bytes_written != rec->bytes_written)
>> +	if (!record__threads_enabled(rec) && bytes_written != rec->bytes_written)
>>  		rc = record__write(rec, NULL, &finished_round_event, sizeof(finished_round_event));
>>  
>>  	if (overwrite)
>> @@ -1514,7 +1535,9 @@ static void record__init_features(struct record *rec)
>>  	if (!rec->opts.use_clockid)
>>  		perf_header__clear_feat(&session->header, HEADER_CLOCK_DATA);
>>  
>> -	perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
>> +	if (!record__threads_enabled(rec))
>> +		perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
>> +
>>  	if (!record__comp_enabled(rec))
>>  		perf_header__clear_feat(&session->header, HEADER_COMPRESSED);
>>  
>> @@ -1525,15 +1548,21 @@ static void
>>  record__finish_output(struct record *rec)
>>  {
>>  	struct perf_data *data = &rec->data;
>> -	int fd = perf_data__fd(data);
>> +	int i, fd = perf_data__fd(data);
>>  
>>  	if (data->is_pipe)
>>  		return;
>>  
>>  	rec->session->header.data_size += rec->bytes_written;
>>  	data->file.size = lseek(perf_data__fd(data), 0, SEEK_CUR);
>> +	if (record__threads_enabled(rec)) {
>> +		for (i = 0; i < data->dir.nr; i++)
>> +			data->dir.files[i].size = lseek(data->dir.files[i].fd, 0, SEEK_CUR);
>> +	}
>>  
>>  	if (!rec->no_buildid) {
>> +		/* this will be recalculated during process_buildids() */
>> +		rec->samples = 0;
>>  		process_buildids(rec);
>>  
>>  		if (rec->buildid_all)
>> @@ -2438,8 +2467,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>>  		status = err;
>>  
>>  	record__synthesize(rec, true);
>> -	/* this will be recalculated during process_buildids() */
>> -	rec->samples = 0;
>>  
>>  	if (!err) {
>>  		if (!rec->timestamp_filename) {
>> @@ -3179,7 +3206,7 @@ int cmd_record(int argc, const char **argv)
>>  
>>  	}
>>  
>> -	if (rec->opts.kcore)
>> +	if (rec->opts.kcore || record__threads_enabled(rec))
>>  		rec->data.is_dir = true;
>>  
>>  	if (rec->opts.comp_level != 0) {
>> diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
>> index 266760ac9143..9c13a39cc58f 100644
>> --- a/tools/perf/util/record.h
>> +++ b/tools/perf/util/record.h
>> @@ -74,6 +74,7 @@ struct record_opts {
>>  	int	      ctl_fd;
>>  	int	      ctl_fd_ack;
>>  	bool	      ctl_fd_close;
>> +	int	      threads_spec;
>>  };
>>  
>>  extern const char * const *record_usage;
>> -- 
>> 2.24.1
>>
